00:00:00.000 Started by upstream project "autotest-nightly-lts" build number 2459 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3720 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.121 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.122 The recommended git tool is: git 00:00:00.122 using credential 00000000-0000-0000-0000-000000000002 00:00:00.124 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.178 Fetching changes from the remote Git repository 00:00:00.180 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.223 Using shallow fetch with depth 1 00:00:00.223 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.223 > git --version # timeout=10 00:00:00.257 > git --version # 'git version 2.39.2' 00:00:00.257 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.282 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.282 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.387 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.399 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.412 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.412 > git config core.sparsecheckout # timeout=10 00:00:07.425 > git read-tree -mu HEAD # timeout=10 00:00:07.443 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.466 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.466 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.549 [Pipeline] Start of Pipeline 00:00:07.559 [Pipeline] library 00:00:07.560 Loading library shm_lib@master 00:00:07.560 Library shm_lib@master is cached. Copying from home. 00:00:07.573 [Pipeline] node 00:00:07.584 Running on WFP37 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:07.585 [Pipeline] { 00:00:07.594 [Pipeline] catchError 00:00:07.595 [Pipeline] { 00:00:07.606 [Pipeline] wrap 00:00:07.614 [Pipeline] { 00:00:07.620 [Pipeline] stage 00:00:07.621 [Pipeline] { (Prologue) 00:00:07.825 [Pipeline] sh 00:00:08.113 + logger -p user.info -t JENKINS-CI 00:00:08.129 [Pipeline] echo 00:00:08.130 Node: WFP37 00:00:08.137 [Pipeline] sh 00:00:08.460 [Pipeline] setCustomBuildProperty 00:00:08.472 [Pipeline] echo 00:00:08.474 Cleanup processes 00:00:08.479 [Pipeline] sh 00:00:08.764 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:08.764 1338876 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:08.776 [Pipeline] sh 00:00:09.060 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:09.060 ++ grep -v 'sudo pgrep' 00:00:09.060 ++ awk '{print $1}' 00:00:09.060 + sudo kill -9 00:00:09.060 + true 00:00:09.074 [Pipeline] cleanWs 00:00:09.083 [WS-CLEANUP] Deleting project workspace... 00:00:09.083 [WS-CLEANUP] Deferred wipeout is used... 
00:00:09.090 [WS-CLEANUP] done 00:00:09.095 [Pipeline] setCustomBuildProperty 00:00:09.108 [Pipeline] sh 00:00:09.389 + sudo git config --global --replace-all safe.directory '*' 00:00:09.487 [Pipeline] httpRequest 00:00:10.035 [Pipeline] echo 00:00:10.036 Sorcerer 10.211.164.20 is alive 00:00:10.045 [Pipeline] retry 00:00:10.047 [Pipeline] { 00:00:10.059 [Pipeline] httpRequest 00:00:10.063 HttpMethod: GET 00:00:10.064 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.065 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.087 Response Code: HTTP/1.1 200 OK 00:00:10.088 Success: Status code 200 is in the accepted range: 200,404 00:00:10.088 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:27.677 [Pipeline] } 00:00:27.694 [Pipeline] // retry 00:00:27.701 [Pipeline] sh 00:00:27.985 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:28.000 [Pipeline] httpRequest 00:00:28.403 [Pipeline] echo 00:00:28.405 Sorcerer 10.211.164.20 is alive 00:00:28.415 [Pipeline] retry 00:00:28.418 [Pipeline] { 00:00:28.432 [Pipeline] httpRequest 00:00:28.437 HttpMethod: GET 00:00:28.437 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:28.438 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:28.458 Response Code: HTTP/1.1 200 OK 00:00:28.458 Success: Status code 200 is in the accepted range: 200,404 00:00:28.458 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:13.384 [Pipeline] } 00:01:13.401 [Pipeline] // retry 00:01:13.409 [Pipeline] sh 00:01:13.693 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:15.611 [Pipeline] sh 00:01:15.895 + git -C spdk log --oneline -n5 00:01:15.895 c13c99a5e test: Various fixes for Fedora40 00:01:15.895 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:01:15.895 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:01:15.895 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:01:15.895 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:01:15.906 [Pipeline] } 00:01:15.920 [Pipeline] // stage 00:01:15.928 [Pipeline] stage 00:01:15.930 [Pipeline] { (Prepare) 00:01:15.947 [Pipeline] writeFile 00:01:15.962 [Pipeline] sh 00:01:16.246 + logger -p user.info -t JENKINS-CI 00:01:16.258 [Pipeline] sh 00:01:16.541 + logger -p user.info -t JENKINS-CI 00:01:16.553 [Pipeline] sh 00:01:16.837 + cat autorun-spdk.conf 00:01:16.837 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:16.837 SPDK_TEST_NVMF=1 00:01:16.837 SPDK_TEST_NVME_CLI=1 00:01:16.837 SPDK_TEST_NVMF_NICS=mlx5 00:01:16.837 SPDK_RUN_UBSAN=1 00:01:16.837 NET_TYPE=phy 00:01:16.844 RUN_NIGHTLY=1 00:01:16.848 [Pipeline] readFile 00:01:16.871 [Pipeline] withEnv 00:01:16.873 [Pipeline] { 00:01:16.885 [Pipeline] sh 00:01:17.169 + set -ex 00:01:17.169 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:01:17.169 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:17.169 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:17.169 ++ SPDK_TEST_NVMF=1 00:01:17.169 ++ SPDK_TEST_NVME_CLI=1 00:01:17.169 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:17.169 ++ SPDK_RUN_UBSAN=1 00:01:17.169 ++ NET_TYPE=phy 00:01:17.169 ++ RUN_NIGHTLY=1 00:01:17.169 + case 
$SPDK_TEST_NVMF_NICS in 00:01:17.169 + DRIVERS=mlx5_ib 00:01:17.169 + [[ -n mlx5_ib ]] 00:01:17.169 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:17.169 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:23.740 rmmod: ERROR: Module irdma is not currently loaded 00:01:23.740 rmmod: ERROR: Module i40iw is not currently loaded 00:01:23.740 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:23.740 + true 00:01:23.740 + for D in $DRIVERS 00:01:23.740 + sudo modprobe mlx5_ib 00:01:23.740 + exit 0 00:01:23.750 [Pipeline] } 00:01:23.765 [Pipeline] // withEnv 00:01:23.770 [Pipeline] } 00:01:23.783 [Pipeline] // stage 00:01:23.792 [Pipeline] catchError 00:01:23.794 [Pipeline] { 00:01:23.808 [Pipeline] timeout 00:01:23.808 Timeout set to expire in 1 hr 0 min 00:01:23.809 [Pipeline] { 00:01:23.823 [Pipeline] stage 00:01:23.825 [Pipeline] { (Tests) 00:01:23.839 [Pipeline] sh 00:01:24.124 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:01:24.124 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:01:24.124 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:01:24.124 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:01:24.124 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:24.124 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:01:24.124 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:01:24.124 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:24.124 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:01:24.124 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:24.124 + [[ nvmf-phy-autotest == pkgdep-* ]] 00:01:24.124 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:01:24.124 + source /etc/os-release 00:01:24.124 ++ NAME='Fedora Linux' 00:01:24.124 ++ VERSION='39 (Cloud Edition)' 00:01:24.124 ++ ID=fedora 00:01:24.124 ++ VERSION_ID=39 00:01:24.124 ++ VERSION_CODENAME= 00:01:24.124 ++ PLATFORM_ID=platform:f39 00:01:24.124 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:24.124 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:24.124 ++ LOGO=fedora-logo-icon 00:01:24.124 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:24.124 ++ HOME_URL=https://fedoraproject.org/ 00:01:24.124 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:24.124 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:24.124 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:24.124 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:24.124 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:24.124 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:24.124 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:24.124 ++ SUPPORT_END=2024-11-12 00:01:24.124 ++ VARIANT='Cloud Edition' 00:01:24.124 ++ VARIANT_ID=cloud 00:01:24.124 + uname -a 00:01:24.124 Linux spdk-wfp-37 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:24.124 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:01:26.658 Hugepages 00:01:26.658 node hugesize free / total 00:01:26.658 node0 1048576kB 0 / 0 00:01:26.658 node0 2048kB 0 / 0 00:01:26.658 node1 1048576kB 0 / 0 00:01:26.658 node1 2048kB 0 / 0 00:01:26.658 00:01:26.658 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:26.658 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:26.658 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:26.658 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:26.658 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:26.658 I/OAT 
0000:00:04.4 8086 2021 0 ioatdma - - 00:01:26.659 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:26.659 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:26.659 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:26.659 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:26.659 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:26.659 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:26.659 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:26.659 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:26.659 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:26.659 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:26.659 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:26.659 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:26.659 + rm -f /tmp/spdk-ld-path 00:01:26.659 + source autorun-spdk.conf 00:01:26.659 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:26.659 ++ SPDK_TEST_NVMF=1 00:01:26.659 ++ SPDK_TEST_NVME_CLI=1 00:01:26.659 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:26.659 ++ SPDK_RUN_UBSAN=1 00:01:26.659 ++ NET_TYPE=phy 00:01:26.659 ++ RUN_NIGHTLY=1 00:01:26.659 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:26.659 + [[ -n '' ]] 00:01:26.659 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:26.659 + for M in /var/spdk/build-*-manifest.txt 00:01:26.659 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:26.659 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:26.659 + for M in /var/spdk/build-*-manifest.txt 00:01:26.659 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:26.659 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:26.659 + for M in /var/spdk/build-*-manifest.txt 00:01:26.659 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:26.659 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:26.659 ++ uname 00:01:26.659 + [[ Linux == \L\i\n\u\x ]] 00:01:26.659 + sudo dmesg -T 00:01:26.659 + sudo dmesg --clear 00:01:26.659 + dmesg_pid=1339826 00:01:26.659 + [[ Fedora Linux == FreeBSD ]] 00:01:26.659 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:26.659 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:26.659 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:26.659 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:26.659 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:26.659 + [[ -x /usr/src/fio-static/fio ]] 00:01:26.659 + sudo dmesg -Tw 00:01:26.659 + export FIO_BIN=/usr/src/fio-static/fio 00:01:26.659 + FIO_BIN=/usr/src/fio-static/fio 00:01:26.659 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:26.659 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:26.659 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:26.659 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:26.659 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:26.659 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:26.659 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:26.659 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:26.659 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:26.659 Test configuration: 00:01:26.659 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:26.659 SPDK_TEST_NVMF=1 00:01:26.659 SPDK_TEST_NVME_CLI=1 00:01:26.659 SPDK_TEST_NVMF_NICS=mlx5 00:01:26.659 SPDK_RUN_UBSAN=1 00:01:26.659 NET_TYPE=phy 00:01:26.659 RUN_NIGHTLY=1 10:55:47 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:01:26.659 10:55:47 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:01:26.659 10:55:47 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:26.659 10:55:47 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:26.659 10:55:47 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:26.659 10:55:47 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:26.659 10:55:47 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:26.659 10:55:47 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:26.659 10:55:47 -- paths/export.sh@5 -- $ export PATH 00:01:26.659 10:55:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:26.659 10:55:47 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:01:26.659 10:55:47 -- common/autobuild_common.sh@440 -- $ date +%s 00:01:26.659 10:55:47 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734083747.XXXXXX 00:01:26.659 10:55:47 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734083747.pwqHIm 00:01:26.659 10:55:47 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:01:26.659 10:55:47 -- common/autobuild_common.sh@446 
-- $ '[' -n '' ']' 00:01:26.659 10:55:47 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:01:26.659 10:55:47 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:26.659 10:55:47 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:26.659 10:55:47 -- common/autobuild_common.sh@456 -- $ get_config_params 00:01:26.659 10:55:47 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:01:26.659 10:55:47 -- common/autotest_common.sh@10 -- $ set +x 00:01:26.659 10:55:47 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:01:26.659 10:55:47 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:26.659 10:55:47 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:26.659 10:55:47 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:26.659 10:55:47 -- spdk/autobuild.sh@16 -- $ date -u 00:01:26.659 Fri Dec 13 09:55:47 AM UTC 2024 00:01:26.659 10:55:47 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:26.659 LTS-67-gc13c99a5e 00:01:26.659 10:55:47 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:26.659 10:55:47 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:26.659 10:55:47 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:26.659 10:55:47 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:26.659 10:55:47 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:26.659 10:55:47 -- common/autotest_common.sh@10 -- $ set +x 00:01:26.659 ************************************ 00:01:26.659 START TEST ubsan 00:01:26.659 ************************************ 00:01:26.659 10:55:47 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:01:26.659 using ubsan 00:01:26.659 00:01:26.659 real 0m0.000s 00:01:26.659 user 0m0.000s 00:01:26.659 sys 0m0.000s 00:01:26.659 10:55:47 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:01:26.659 10:55:47 -- common/autotest_common.sh@10 -- $ set +x 00:01:26.659 ************************************ 00:01:26.659 END TEST ubsan 00:01:26.659 ************************************ 00:01:26.659 10:55:47 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:26.659 10:55:47 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:26.659 10:55:47 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:26.659 10:55:47 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:26.659 10:55:47 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:26.659 10:55:47 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:26.659 10:55:47 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:26.659 10:55:47 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:26.659 10:55:47 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:01:26.659 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:01:26.659 Using default DPDK in 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:01:27.227 Using 'verbs' RDMA provider 00:01:39.704 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:01:51.920 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:51.920 Creating mk/config.mk...done. 00:01:51.920 Creating mk/cc.flags.mk...done. 00:01:51.920 Type 'make' to build. 00:01:51.920 10:56:10 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:01:51.920 10:56:10 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:51.920 10:56:10 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:51.920 10:56:10 -- common/autotest_common.sh@10 -- $ set +x 00:01:51.920 ************************************ 00:01:51.920 START TEST make 00:01:51.920 ************************************ 00:01:51.920 10:56:10 -- common/autotest_common.sh@1114 -- $ make -j112 00:01:51.920 make[1]: Nothing to be done for 'all'. 00:01:57.195 The Meson build system 00:01:57.195 Version: 1.5.0 00:01:57.195 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk 00:01:57.196 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp 00:01:57.196 Build type: native build 00:01:57.196 Program cat found: YES (/usr/bin/cat) 00:01:57.196 Project name: DPDK 00:01:57.196 Project version: 23.11.0 00:01:57.196 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:57.196 C linker for the host machine: cc ld.bfd 2.40-14 00:01:57.196 Host machine cpu family: x86_64 00:01:57.196 Host machine cpu: x86_64 00:01:57.196 Message: ## Building in Developer Mode ## 00:01:57.196 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:57.196 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:57.196 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:57.196 Program python3 found: YES (/usr/bin/python3) 00:01:57.196 Program cat found: YES (/usr/bin/cat) 00:01:57.196 Compiler for C supports arguments -march=native: YES 00:01:57.196 Checking for size of "void *" : 8 00:01:57.196 Checking for size of "void *" : 8 (cached) 00:01:57.196 Library m found: YES 00:01:57.196 Library numa found: YES 00:01:57.196 Has header "numaif.h" : YES 00:01:57.196 Library fdt found: NO 00:01:57.196 Library execinfo found: NO 00:01:57.196 Has header "execinfo.h" : YES 00:01:57.196 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:57.196 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:57.196 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:57.196 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:57.196 Run-time dependency openssl found: YES 3.1.1 00:01:57.196 Run-time dependency libpcap found: YES 1.10.4 00:01:57.196 Has header "pcap.h" with dependency libpcap: YES 00:01:57.196 Compiler for C supports arguments -Wcast-qual: YES 00:01:57.196 Compiler for C supports arguments -Wdeprecated: YES 00:01:57.196 Compiler for C supports arguments -Wformat: YES 00:01:57.196 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:57.196 Compiler for C supports arguments -Wformat-security: NO 00:01:57.196 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:57.196 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:57.196 Compiler for C supports arguments 
-Wnested-externs: YES 00:01:57.196 Compiler for C supports arguments -Wold-style-definition: YES 00:01:57.196 Compiler for C supports arguments -Wpointer-arith: YES 00:01:57.196 Compiler for C supports arguments -Wsign-compare: YES 00:01:57.196 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:57.196 Compiler for C supports arguments -Wundef: YES 00:01:57.196 Compiler for C supports arguments -Wwrite-strings: YES 00:01:57.196 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:57.196 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:57.196 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:57.196 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:57.196 Program objdump found: YES (/usr/bin/objdump) 00:01:57.196 Compiler for C supports arguments -mavx512f: YES 00:01:57.196 Checking if "AVX512 checking" compiles: YES 00:01:57.196 Fetching value of define "__SSE4_2__" : 1 00:01:57.196 Fetching value of define "__AES__" : 1 00:01:57.196 Fetching value of define "__AVX__" : 1 00:01:57.196 Fetching value of define "__AVX2__" : 1 00:01:57.196 Fetching value of define "__AVX512BW__" : 1 00:01:57.196 Fetching value of define "__AVX512CD__" : 1 00:01:57.196 Fetching value of define "__AVX512DQ__" : 1 00:01:57.196 Fetching value of define "__AVX512F__" : 1 00:01:57.196 Fetching value of define "__AVX512VL__" : 1 00:01:57.196 Fetching value of define "__PCLMUL__" : 1 00:01:57.196 Fetching value of define "__RDRND__" : 1 00:01:57.196 Fetching value of define "__RDSEED__" : 1 00:01:57.196 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:57.196 Fetching value of define "__znver1__" : (undefined) 00:01:57.196 Fetching value of define "__znver2__" : (undefined) 00:01:57.196 Fetching value of define "__znver3__" : (undefined) 00:01:57.196 Fetching value of define "__znver4__" : (undefined) 00:01:57.196 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:57.196 Message: lib/log: Defining dependency "log" 00:01:57.196 Message: lib/kvargs: Defining dependency "kvargs" 00:01:57.196 Message: lib/telemetry: Defining dependency "telemetry" 00:01:57.196 Checking for function "getentropy" : NO 00:01:57.196 Message: lib/eal: Defining dependency "eal" 00:01:57.196 Message: lib/ring: Defining dependency "ring" 00:01:57.196 Message: lib/rcu: Defining dependency "rcu" 00:01:57.196 Message: lib/mempool: Defining dependency "mempool" 00:01:57.196 Message: lib/mbuf: Defining dependency "mbuf" 00:01:57.196 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:57.196 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:57.196 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:57.196 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:57.196 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:57.196 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:57.196 Compiler for C supports arguments -mpclmul: YES 00:01:57.196 Compiler for C supports arguments -maes: YES 00:01:57.196 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:57.196 Compiler for C supports arguments -mavx512bw: YES 00:01:57.196 Compiler for C supports arguments -mavx512dq: YES 00:01:57.196 Compiler for C supports arguments -mavx512vl: YES 00:01:57.196 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:57.196 Compiler for C supports arguments -mavx2: YES 00:01:57.196 Compiler for C supports arguments -mavx: YES 00:01:57.196 Message: lib/net: Defining dependency "net" 
00:01:57.196 Message: lib/meter: Defining dependency "meter" 00:01:57.196 Message: lib/ethdev: Defining dependency "ethdev" 00:01:57.196 Message: lib/pci: Defining dependency "pci" 00:01:57.196 Message: lib/cmdline: Defining dependency "cmdline" 00:01:57.196 Message: lib/hash: Defining dependency "hash" 00:01:57.196 Message: lib/timer: Defining dependency "timer" 00:01:57.196 Message: lib/compressdev: Defining dependency "compressdev" 00:01:57.196 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:57.196 Message: lib/dmadev: Defining dependency "dmadev" 00:01:57.196 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:57.196 Message: lib/power: Defining dependency "power" 00:01:57.196 Message: lib/reorder: Defining dependency "reorder" 00:01:57.196 Message: lib/security: Defining dependency "security" 00:01:57.196 Has header "linux/userfaultfd.h" : YES 00:01:57.196 Has header "linux/vduse.h" : YES 00:01:57.196 Message: lib/vhost: Defining dependency "vhost" 00:01:57.196 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:57.196 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:57.196 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:57.196 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:57.196 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:57.196 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:57.196 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:57.196 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:57.196 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:57.196 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:57.196 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:57.196 Configuring doxy-api-html.conf using configuration 00:01:57.196 Configuring doxy-api-man.conf using configuration 00:01:57.196 Program mandb found: YES (/usr/bin/mandb) 00:01:57.196 Program sphinx-build found: NO 00:01:57.196 Configuring rte_build_config.h using configuration 00:01:57.196 Message: 00:01:57.196 ================= 00:01:57.196 Applications Enabled 00:01:57.196 ================= 00:01:57.196 00:01:57.196 apps: 00:01:57.196 00:01:57.196 00:01:57.196 Message: 00:01:57.196 ================= 00:01:57.196 Libraries Enabled 00:01:57.196 ================= 00:01:57.196 00:01:57.196 libs: 00:01:57.196 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:57.196 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:57.196 cryptodev, dmadev, power, reorder, security, vhost, 00:01:57.196 00:01:57.196 Message: 00:01:57.196 =============== 00:01:57.196 Drivers Enabled 00:01:57.196 =============== 00:01:57.196 00:01:57.196 common: 00:01:57.196 00:01:57.196 bus: 00:01:57.196 pci, vdev, 00:01:57.196 mempool: 00:01:57.196 ring, 00:01:57.196 dma: 00:01:57.196 00:01:57.196 net: 00:01:57.196 00:01:57.196 crypto: 00:01:57.196 00:01:57.196 compress: 00:01:57.196 00:01:57.196 vdpa: 00:01:57.196 00:01:57.196 00:01:57.196 Message: 00:01:57.196 ================= 00:01:57.196 Content Skipped 00:01:57.196 ================= 00:01:57.196 00:01:57.196 apps: 00:01:57.196 dumpcap: explicitly disabled via build config 00:01:57.196 graph: explicitly disabled via build config 00:01:57.196 pdump: explicitly disabled via build config 00:01:57.196 proc-info: explicitly disabled via build config 00:01:57.196 test-acl: explicitly disabled via build 
config 00:01:57.196 test-bbdev: explicitly disabled via build config 00:01:57.196 test-cmdline: explicitly disabled via build config 00:01:57.196 test-compress-perf: explicitly disabled via build config 00:01:57.196 test-crypto-perf: explicitly disabled via build config 00:01:57.196 test-dma-perf: explicitly disabled via build config 00:01:57.196 test-eventdev: explicitly disabled via build config 00:01:57.196 test-fib: explicitly disabled via build config 00:01:57.196 test-flow-perf: explicitly disabled via build config 00:01:57.196 test-gpudev: explicitly disabled via build config 00:01:57.196 test-mldev: explicitly disabled via build config 00:01:57.196 test-pipeline: explicitly disabled via build config 00:01:57.196 test-pmd: explicitly disabled via build config 00:01:57.196 test-regex: explicitly disabled via build config 00:01:57.196 test-sad: explicitly disabled via build config 00:01:57.196 test-security-perf: explicitly disabled via build config 00:01:57.196 00:01:57.196 libs: 00:01:57.196 metrics: explicitly disabled via build config 00:01:57.196 acl: explicitly disabled via build config 00:01:57.196 bbdev: explicitly disabled via build config 00:01:57.196 bitratestats: explicitly disabled via build config 00:01:57.196 bpf: explicitly disabled via build config 00:01:57.196 cfgfile: explicitly disabled via build config 00:01:57.196 distributor: explicitly disabled via build config 00:01:57.196 efd: explicitly disabled via build config 00:01:57.197 eventdev: explicitly disabled via build config 00:01:57.197 dispatcher: explicitly disabled via build config 00:01:57.197 gpudev: explicitly disabled via build config 00:01:57.197 gro: explicitly disabled via build config 00:01:57.197 gso: explicitly disabled via build config 00:01:57.197 ip_frag: explicitly disabled via build config 00:01:57.197 jobstats: explicitly disabled via build config 00:01:57.197 latencystats: explicitly disabled via build config 00:01:57.197 lpm: explicitly disabled via build config 00:01:57.197 member: explicitly disabled via build config 00:01:57.197 pcapng: explicitly disabled via build config 00:01:57.197 rawdev: explicitly disabled via build config 00:01:57.197 regexdev: explicitly disabled via build config 00:01:57.197 mldev: explicitly disabled via build config 00:01:57.197 rib: explicitly disabled via build config 00:01:57.197 sched: explicitly disabled via build config 00:01:57.197 stack: explicitly disabled via build config 00:01:57.197 ipsec: explicitly disabled via build config 00:01:57.197 pdcp: explicitly disabled via build config 00:01:57.197 fib: explicitly disabled via build config 00:01:57.197 port: explicitly disabled via build config 00:01:57.197 pdump: explicitly disabled via build config 00:01:57.197 table: explicitly disabled via build config 00:01:57.197 pipeline: explicitly disabled via build config 00:01:57.197 graph: explicitly disabled via build config 00:01:57.197 node: explicitly disabled via build config 00:01:57.197 00:01:57.197 drivers: 00:01:57.197 common/cpt: not in enabled drivers build config 00:01:57.197 common/dpaax: not in enabled drivers build config 00:01:57.197 common/iavf: not in enabled drivers build config 00:01:57.197 common/idpf: not in enabled drivers build config 00:01:57.197 common/mvep: not in enabled drivers build config 00:01:57.197 common/octeontx: not in enabled drivers build config 00:01:57.197 bus/auxiliary: not in enabled drivers build config 00:01:57.197 bus/cdx: not in enabled drivers build config 00:01:57.197 bus/dpaa: not in enabled drivers build 
config 00:01:57.197 bus/fslmc: not in enabled drivers build config 00:01:57.197 bus/ifpga: not in enabled drivers build config 00:01:57.197 bus/platform: not in enabled drivers build config 00:01:57.197 bus/vmbus: not in enabled drivers build config 00:01:57.197 common/cnxk: not in enabled drivers build config 00:01:57.197 common/mlx5: not in enabled drivers build config 00:01:57.197 common/nfp: not in enabled drivers build config 00:01:57.197 common/qat: not in enabled drivers build config 00:01:57.197 common/sfc_efx: not in enabled drivers build config 00:01:57.197 mempool/bucket: not in enabled drivers build config 00:01:57.197 mempool/cnxk: not in enabled drivers build config 00:01:57.197 mempool/dpaa: not in enabled drivers build config 00:01:57.197 mempool/dpaa2: not in enabled drivers build config 00:01:57.197 mempool/octeontx: not in enabled drivers build config 00:01:57.197 mempool/stack: not in enabled drivers build config 00:01:57.197 dma/cnxk: not in enabled drivers build config 00:01:57.197 dma/dpaa: not in enabled drivers build config 00:01:57.197 dma/dpaa2: not in enabled drivers build config 00:01:57.197 dma/hisilicon: not in enabled drivers build config 00:01:57.197 dma/idxd: not in enabled drivers build config 00:01:57.197 dma/ioat: not in enabled drivers build config 00:01:57.197 dma/skeleton: not in enabled drivers build config 00:01:57.197 net/af_packet: not in enabled drivers build config 00:01:57.197 net/af_xdp: not in enabled drivers build config 00:01:57.197 net/ark: not in enabled drivers build config 00:01:57.197 net/atlantic: not in enabled drivers build config 00:01:57.197 net/avp: not in enabled drivers build config 00:01:57.197 net/axgbe: not in enabled drivers build config 00:01:57.197 net/bnx2x: not in enabled drivers build config 00:01:57.197 net/bnxt: not in enabled drivers build config 00:01:57.197 net/bonding: not in enabled drivers build config 00:01:57.197 net/cnxk: not in enabled drivers build config 00:01:57.197 net/cpfl: not in enabled drivers build config 00:01:57.197 net/cxgbe: not in enabled drivers build config 00:01:57.197 net/dpaa: not in enabled drivers build config 00:01:57.197 net/dpaa2: not in enabled drivers build config 00:01:57.197 net/e1000: not in enabled drivers build config 00:01:57.197 net/ena: not in enabled drivers build config 00:01:57.197 net/enetc: not in enabled drivers build config 00:01:57.197 net/enetfec: not in enabled drivers build config 00:01:57.197 net/enic: not in enabled drivers build config 00:01:57.197 net/failsafe: not in enabled drivers build config 00:01:57.197 net/fm10k: not in enabled drivers build config 00:01:57.197 net/gve: not in enabled drivers build config 00:01:57.197 net/hinic: not in enabled drivers build config 00:01:57.197 net/hns3: not in enabled drivers build config 00:01:57.197 net/i40e: not in enabled drivers build config 00:01:57.197 net/iavf: not in enabled drivers build config 00:01:57.197 net/ice: not in enabled drivers build config 00:01:57.197 net/idpf: not in enabled drivers build config 00:01:57.197 net/igc: not in enabled drivers build config 00:01:57.197 net/ionic: not in enabled drivers build config 00:01:57.197 net/ipn3ke: not in enabled drivers build config 00:01:57.197 net/ixgbe: not in enabled drivers build config 00:01:57.197 net/mana: not in enabled drivers build config 00:01:57.197 net/memif: not in enabled drivers build config 00:01:57.197 net/mlx4: not in enabled drivers build config 00:01:57.197 net/mlx5: not in enabled drivers build config 00:01:57.197 net/mvneta: not in 
enabled drivers build config 00:01:57.197 net/mvpp2: not in enabled drivers build config 00:01:57.197 net/netvsc: not in enabled drivers build config 00:01:57.197 net/nfb: not in enabled drivers build config 00:01:57.197 net/nfp: not in enabled drivers build config 00:01:57.197 net/ngbe: not in enabled drivers build config 00:01:57.197 net/null: not in enabled drivers build config 00:01:57.197 net/octeontx: not in enabled drivers build config 00:01:57.197 net/octeon_ep: not in enabled drivers build config 00:01:57.197 net/pcap: not in enabled drivers build config 00:01:57.197 net/pfe: not in enabled drivers build config 00:01:57.197 net/qede: not in enabled drivers build config 00:01:57.197 net/ring: not in enabled drivers build config 00:01:57.197 net/sfc: not in enabled drivers build config 00:01:57.197 net/softnic: not in enabled drivers build config 00:01:57.197 net/tap: not in enabled drivers build config 00:01:57.197 net/thunderx: not in enabled drivers build config 00:01:57.197 net/txgbe: not in enabled drivers build config 00:01:57.197 net/vdev_netvsc: not in enabled drivers build config 00:01:57.197 net/vhost: not in enabled drivers build config 00:01:57.197 net/virtio: not in enabled drivers build config 00:01:57.197 net/vmxnet3: not in enabled drivers build config 00:01:57.197 raw/*: missing internal dependency, "rawdev" 00:01:57.197 crypto/armv8: not in enabled drivers build config 00:01:57.197 crypto/bcmfs: not in enabled drivers build config 00:01:57.197 crypto/caam_jr: not in enabled drivers build config 00:01:57.197 crypto/ccp: not in enabled drivers build config 00:01:57.197 crypto/cnxk: not in enabled drivers build config 00:01:57.197 crypto/dpaa_sec: not in enabled drivers build config 00:01:57.197 crypto/dpaa2_sec: not in enabled drivers build config 00:01:57.197 crypto/ipsec_mb: not in enabled drivers build config 00:01:57.197 crypto/mlx5: not in enabled drivers build config 00:01:57.197 crypto/mvsam: not in enabled drivers build config 00:01:57.197 crypto/nitrox: not in enabled drivers build config 00:01:57.197 crypto/null: not in enabled drivers build config 00:01:57.197 crypto/octeontx: not in enabled drivers build config 00:01:57.197 crypto/openssl: not in enabled drivers build config 00:01:57.197 crypto/scheduler: not in enabled drivers build config 00:01:57.197 crypto/uadk: not in enabled drivers build config 00:01:57.197 crypto/virtio: not in enabled drivers build config 00:01:57.197 compress/isal: not in enabled drivers build config 00:01:57.197 compress/mlx5: not in enabled drivers build config 00:01:57.197 compress/octeontx: not in enabled drivers build config 00:01:57.197 compress/zlib: not in enabled drivers build config 00:01:57.197 regex/*: missing internal dependency, "regexdev" 00:01:57.197 ml/*: missing internal dependency, "mldev" 00:01:57.197 vdpa/ifc: not in enabled drivers build config 00:01:57.197 vdpa/mlx5: not in enabled drivers build config 00:01:57.197 vdpa/nfp: not in enabled drivers build config 00:01:57.197 vdpa/sfc: not in enabled drivers build config 00:01:57.197 event/*: missing internal dependency, "eventdev" 00:01:57.197 baseband/*: missing internal dependency, "bbdev" 00:01:57.197 gpu/*: missing internal dependency, "gpudev" 00:01:57.197 00:01:57.197 00:01:57.464 Build targets in project: 85 00:01:57.464 00:01:57.464 DPDK 23.11.0 00:01:57.464 00:01:57.464 User defined options 00:01:57.464 buildtype : debug 00:01:57.464 default_library : shared 00:01:57.464 libdir : lib 00:01:57.464 prefix : 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:01:57.464 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:01:57.464 c_link_args : 00:01:57.464 cpu_instruction_set: native 00:01:57.464 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:57.465 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,pcapng,bbdev 00:01:57.465 enable_docs : false 00:01:57.465 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:57.465 enable_kmods : false 00:01:57.465 tests : false 00:01:57.465 00:01:57.465 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:57.799 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp' 00:01:57.799 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:57.799 [2/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:57.799 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:57.799 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:58.093 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:58.093 [6/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:58.093 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:58.093 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:58.093 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:58.093 [10/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:58.093 [11/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:58.093 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:58.093 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:58.093 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:58.093 [15/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:58.093 [16/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:58.093 [17/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:58.093 [18/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:58.093 [19/265] Linking static target lib/librte_log.a 00:01:58.093 [20/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:58.093 [21/265] Linking static target lib/librte_kvargs.a 00:01:58.093 [22/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:58.093 [23/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:58.093 [24/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:58.093 [25/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:58.093 [26/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:58.093 [27/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:58.093 [28/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 
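For reference, the "User defined options" summary above maps onto a meson invocation along the lines of the sketch below. SPDK's configure script generates this call itself, so the exact command line is a reconstruction from the logged options, not something captured from this job.

# Sketch reconstructed from the "User defined options" block above; spdk/configure
# normally drives this step, so treat the explicit command line as an assumption.
meson setup dpdk/build-tmp dpdk \
  --buildtype=debug \
  --default-library=shared \
  --libdir=lib \
  --prefix=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build \
  -Dc_args='-fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds' \
  -Dcpu_instruction_set=native \
  -Denable_docs=false \
  -Denable_kmods=false \
  -Dtests=false \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
  -Ddisable_apps=test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev \
  -Ddisable_libs=port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,pcapng,bbdev
ninja -C dpdk/build-tmp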
00:01:58.093 [29/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:58.093 [30/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:58.093 [31/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:58.093 [32/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:58.093 [33/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:58.093 [34/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:58.093 [35/265] Linking static target lib/librte_pci.a 00:01:58.093 [36/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:58.093 [37/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:58.093 [38/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:58.093 [39/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:58.093 [40/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:58.369 [41/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:58.369 [42/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:58.369 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:58.369 [44/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:58.369 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:58.369 [46/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:58.370 [47/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:58.370 [48/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:58.370 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:58.370 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:58.370 [51/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:58.370 [52/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:58.370 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:58.370 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:58.370 [55/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:58.370 [56/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:58.370 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:58.370 [58/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:58.370 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:58.370 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:58.370 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:58.370 [62/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:58.370 [63/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:58.370 [64/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:58.370 [65/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:58.370 [66/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:58.370 [67/265] Linking static target lib/librte_ring.a 00:01:58.370 [68/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:58.370 [69/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:58.370 [70/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:58.370 [71/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:58.370 [72/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:58.370 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:58.370 [74/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:58.370 [75/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:58.370 [76/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:58.370 [77/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:58.370 [78/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:58.370 [79/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:58.370 [80/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:58.370 [81/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:58.370 [82/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:58.370 [83/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:58.370 [84/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:58.370 [85/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:58.370 [86/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:58.370 [87/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:58.370 [88/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:58.370 [89/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:58.370 [90/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.370 [91/265] Linking static target lib/librte_meter.a 00:01:58.370 [92/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:58.370 [93/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:58.370 [94/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.370 [95/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:58.370 [96/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:58.370 [97/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:58.370 [98/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:58.370 [99/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:58.370 [100/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:58.370 [101/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:58.370 [102/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:58.370 [103/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:58.370 [104/265] Linking static target lib/librte_timer.a 00:01:58.370 [105/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:58.370 [106/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:58.628 [107/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:58.628 [108/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:58.628 [109/265] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:58.629 [110/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:58.629 [111/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:58.629 [112/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:58.629 [113/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:58.629 [114/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:58.629 [115/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:58.629 [116/265] Linking static target lib/librte_cmdline.a 00:01:58.629 [117/265] Linking static target lib/librte_mempool.a 00:01:58.629 [118/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:58.629 [119/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:58.629 [120/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:58.629 [121/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:58.629 [122/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:58.629 [123/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:58.629 [124/265] Linking static target lib/librte_telemetry.a 00:01:58.629 [125/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:58.629 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:58.629 [127/265] Linking static target lib/librte_net.a 00:01:58.629 [128/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:58.629 [129/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:58.629 [130/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:58.629 [131/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:58.629 [132/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:58.629 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:58.629 [134/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:58.629 [135/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:58.629 [136/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:58.629 [137/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:58.629 [138/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:58.629 [139/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:58.629 [140/265] Linking static target lib/librte_rcu.a 00:01:58.629 [141/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:58.629 [142/265] Linking static target lib/librte_dmadev.a 00:01:58.629 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:58.629 [144/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:58.629 [145/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:58.629 [146/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:58.629 [147/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:58.629 [148/265] Linking static target lib/librte_compressdev.a 00:01:58.629 [149/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:58.629 [150/265] Compiling C object 
lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:58.629 [151/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:58.629 [152/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:58.629 [153/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:58.629 [154/265] Linking static target lib/librte_eal.a 00:01:58.629 [155/265] Linking static target lib/librte_power.a 00:01:58.629 [156/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:58.629 [157/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:58.629 [158/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.629 [159/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.629 [160/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.629 [161/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:58.629 [162/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:58.629 [163/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:58.629 [164/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:58.629 [165/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:58.629 [166/265] Linking target lib/librte_log.so.24.0 00:01:58.629 [167/265] Linking static target lib/librte_reorder.a 00:01:58.629 [168/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:58.629 [169/265] Linking static target lib/librte_mbuf.a 00:01:58.888 [170/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:58.888 [171/265] Linking static target lib/librte_security.a 00:01:58.888 [172/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:58.888 [173/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:58.888 [174/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:58.888 [175/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:58.888 [176/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.888 [177/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:58.888 [178/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:58.888 [179/265] Linking static target lib/librte_hash.a 00:01:58.888 [180/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:58.888 [181/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:58.888 [182/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:58.888 [183/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:58.888 [184/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.888 [185/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:58.888 [186/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:58.888 [187/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:58.888 [188/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.888 [189/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:58.888 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:58.888 [191/265] 
Linking target lib/librte_kvargs.so.24.0 00:01:58.888 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:58.888 [193/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:58.888 [194/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:58.888 [195/265] Linking static target drivers/librte_bus_vdev.a 00:01:58.888 [196/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:58.888 [197/265] Linking static target lib/librte_cryptodev.a 00:01:58.888 [198/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:59.146 [199/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.146 [200/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:59.146 [201/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.147 [202/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:59.147 [203/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:59.147 [204/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:59.147 [205/265] Linking static target drivers/librte_bus_pci.a 00:01:59.147 [206/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:59.147 [207/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:59.147 [208/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:59.147 [209/265] Linking target lib/librte_telemetry.so.24.0 00:01:59.147 [210/265] Linking static target drivers/librte_mempool_ring.a 00:01:59.147 [211/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.147 [212/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:59.147 [213/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.472 [214/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.472 [215/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.472 [216/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.472 [217/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:59.472 [218/265] Linking static target lib/librte_ethdev.a 00:01:59.472 [219/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.472 [220/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:59.472 [221/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.472 [222/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.731 [223/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.731 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.667 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:00.667 [226/265] Linking static target lib/librte_vhost.a 00:02:00.926 [227/265] Generating lib/cryptodev.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:02.302 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.571 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.946 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.946 [231/265] Linking target lib/librte_eal.so.24.0 00:02:09.203 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:09.203 [233/265] Linking target lib/librte_dmadev.so.24.0 00:02:09.203 [234/265] Linking target lib/librte_ring.so.24.0 00:02:09.203 [235/265] Linking target lib/librte_meter.so.24.0 00:02:09.203 [236/265] Linking target lib/librte_pci.so.24.0 00:02:09.203 [237/265] Linking target lib/librte_timer.so.24.0 00:02:09.203 [238/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:09.203 [239/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:09.203 [240/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:09.203 [241/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:09.203 [242/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:09.203 [243/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:09.203 [244/265] Linking target lib/librte_rcu.so.24.0 00:02:09.203 [245/265] Linking target lib/librte_mempool.so.24.0 00:02:09.203 [246/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:09.462 [247/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:09.462 [248/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:09.462 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:09.462 [250/265] Linking target lib/librte_mbuf.so.24.0 00:02:09.462 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:09.719 [252/265] Linking target lib/librte_compressdev.so.24.0 00:02:09.719 [253/265] Linking target lib/librte_net.so.24.0 00:02:09.719 [254/265] Linking target lib/librte_reorder.so.24.0 00:02:09.719 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:02:09.719 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:09.719 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:09.719 [258/265] Linking target lib/librte_cmdline.so.24.0 00:02:09.719 [259/265] Linking target lib/librte_ethdev.so.24.0 00:02:09.719 [260/265] Linking target lib/librte_security.so.24.0 00:02:09.719 [261/265] Linking target lib/librte_hash.so.24.0 00:02:09.977 [262/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:09.977 [263/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:09.977 [264/265] Linking target lib/librte_power.so.24.0 00:02:09.977 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:09.977 INFO: autodetecting backend as ninja 00:02:09.977 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:10.911 CC lib/ut_mock/mock.o 00:02:10.911 CC lib/log/log.o 00:02:10.911 CC lib/log/log_flags.o 00:02:10.911 CC lib/log/log_deprecated.o 00:02:10.911 CC lib/ut/ut.o 00:02:10.911 LIB libspdk_ut_mock.a 00:02:10.911 LIB libspdk_log.a 
00:02:10.911 SO libspdk_ut_mock.so.5.0 00:02:10.911 LIB libspdk_ut.a 00:02:10.911 SO libspdk_log.so.6.1 00:02:10.911 SYMLINK libspdk_ut_mock.so 00:02:10.911 SO libspdk_ut.so.1.0 00:02:10.911 SYMLINK libspdk_log.so 00:02:10.911 SYMLINK libspdk_ut.so 00:02:11.169 CC lib/dma/dma.o 00:02:11.169 CC lib/util/base64.o 00:02:11.169 CC lib/util/bit_array.o 00:02:11.169 CC lib/util/cpuset.o 00:02:11.169 CC lib/util/crc16.o 00:02:11.169 CC lib/ioat/ioat.o 00:02:11.169 CC lib/util/crc32.o 00:02:11.169 CC lib/util/crc32_ieee.o 00:02:11.169 CXX lib/trace_parser/trace.o 00:02:11.169 CC lib/util/crc32c.o 00:02:11.169 CC lib/util/crc64.o 00:02:11.169 CC lib/util/dif.o 00:02:11.169 CC lib/util/fd.o 00:02:11.169 CC lib/util/file.o 00:02:11.169 CC lib/util/hexlify.o 00:02:11.170 CC lib/util/iov.o 00:02:11.170 CC lib/util/math.o 00:02:11.170 CC lib/util/pipe.o 00:02:11.170 CC lib/util/strerror_tls.o 00:02:11.170 CC lib/util/string.o 00:02:11.170 CC lib/util/uuid.o 00:02:11.170 CC lib/util/fd_group.o 00:02:11.170 CC lib/util/xor.o 00:02:11.170 CC lib/util/zipf.o 00:02:11.170 CC lib/vfio_user/host/vfio_user_pci.o 00:02:11.170 CC lib/vfio_user/host/vfio_user.o 00:02:11.428 LIB libspdk_dma.a 00:02:11.428 SO libspdk_dma.so.3.0 00:02:11.428 SYMLINK libspdk_dma.so 00:02:11.428 LIB libspdk_ioat.a 00:02:11.428 SO libspdk_ioat.so.6.0 00:02:11.428 LIB libspdk_vfio_user.a 00:02:11.428 SO libspdk_vfio_user.so.4.0 00:02:11.428 SYMLINK libspdk_ioat.so 00:02:11.686 SYMLINK libspdk_vfio_user.so 00:02:11.686 LIB libspdk_util.a 00:02:11.686 SO libspdk_util.so.8.0 00:02:11.686 SYMLINK libspdk_util.so 00:02:11.943 LIB libspdk_trace_parser.a 00:02:11.943 SO libspdk_trace_parser.so.4.0 00:02:11.943 CC lib/json/json_parse.o 00:02:11.943 CC lib/json/json_util.o 00:02:11.943 CC lib/json/json_write.o 00:02:11.943 CC lib/idxd/idxd.o 00:02:11.943 CC lib/idxd/idxd_user.o 00:02:11.943 CC lib/idxd/idxd_kernel.o 00:02:11.943 SYMLINK libspdk_trace_parser.so 00:02:11.943 CC lib/rdma/common.o 00:02:11.943 CC lib/rdma/rdma_verbs.o 00:02:11.943 CC lib/conf/conf.o 00:02:11.943 CC lib/vmd/vmd.o 00:02:11.943 CC lib/vmd/led.o 00:02:11.943 CC lib/env_dpdk/env.o 00:02:11.943 CC lib/env_dpdk/memory.o 00:02:11.943 CC lib/env_dpdk/init.o 00:02:11.943 CC lib/env_dpdk/pci.o 00:02:11.943 CC lib/env_dpdk/pci_ioat.o 00:02:11.943 CC lib/env_dpdk/threads.o 00:02:11.943 CC lib/env_dpdk/pci_virtio.o 00:02:11.944 CC lib/env_dpdk/pci_vmd.o 00:02:11.944 CC lib/env_dpdk/pci_event.o 00:02:11.944 CC lib/env_dpdk/pci_idxd.o 00:02:11.944 CC lib/env_dpdk/sigbus_handler.o 00:02:11.944 CC lib/env_dpdk/pci_dpdk.o 00:02:11.944 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:11.944 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:12.201 LIB libspdk_conf.a 00:02:12.201 LIB libspdk_json.a 00:02:12.201 LIB libspdk_rdma.a 00:02:12.201 SO libspdk_conf.so.5.0 00:02:12.201 SO libspdk_json.so.5.1 00:02:12.201 SO libspdk_rdma.so.5.0 00:02:12.201 SYMLINK libspdk_conf.so 00:02:12.201 SYMLINK libspdk_json.so 00:02:12.201 SYMLINK libspdk_rdma.so 00:02:12.201 LIB libspdk_idxd.a 00:02:12.459 SO libspdk_idxd.so.11.0 00:02:12.459 LIB libspdk_vmd.a 00:02:12.459 SYMLINK libspdk_idxd.so 00:02:12.459 SO libspdk_vmd.so.5.0 00:02:12.459 CC lib/jsonrpc/jsonrpc_server.o 00:02:12.459 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:12.459 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:12.459 CC lib/jsonrpc/jsonrpc_client.o 00:02:12.459 SYMLINK libspdk_vmd.so 00:02:12.717 LIB libspdk_jsonrpc.a 00:02:12.717 SO libspdk_jsonrpc.so.5.1 00:02:12.717 SYMLINK libspdk_jsonrpc.so 00:02:12.717 LIB libspdk_env_dpdk.a 00:02:12.976 SO 
libspdk_env_dpdk.so.13.0 00:02:12.976 CC lib/rpc/rpc.o 00:02:12.976 SYMLINK libspdk_env_dpdk.so 00:02:12.976 LIB libspdk_rpc.a 00:02:13.234 SO libspdk_rpc.so.5.0 00:02:13.234 SYMLINK libspdk_rpc.so 00:02:13.234 CC lib/notify/notify.o 00:02:13.234 CC lib/notify/notify_rpc.o 00:02:13.234 CC lib/trace/trace.o 00:02:13.234 CC lib/trace/trace_flags.o 00:02:13.234 CC lib/trace/trace_rpc.o 00:02:13.492 CC lib/sock/sock.o 00:02:13.492 CC lib/sock/sock_rpc.o 00:02:13.492 LIB libspdk_notify.a 00:02:13.492 SO libspdk_notify.so.5.0 00:02:13.492 LIB libspdk_trace.a 00:02:13.492 SYMLINK libspdk_notify.so 00:02:13.492 SO libspdk_trace.so.9.0 00:02:13.750 SYMLINK libspdk_trace.so 00:02:13.750 LIB libspdk_sock.a 00:02:13.750 SO libspdk_sock.so.8.0 00:02:13.750 SYMLINK libspdk_sock.so 00:02:13.750 CC lib/thread/thread.o 00:02:13.750 CC lib/thread/iobuf.o 00:02:14.008 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:14.008 CC lib/nvme/nvme_ctrlr.o 00:02:14.008 CC lib/nvme/nvme_fabric.o 00:02:14.008 CC lib/nvme/nvme_ns_cmd.o 00:02:14.008 CC lib/nvme/nvme_ns.o 00:02:14.008 CC lib/nvme/nvme_pcie_common.o 00:02:14.008 CC lib/nvme/nvme_pcie.o 00:02:14.008 CC lib/nvme/nvme_quirks.o 00:02:14.008 CC lib/nvme/nvme_qpair.o 00:02:14.008 CC lib/nvme/nvme_transport.o 00:02:14.008 CC lib/nvme/nvme.o 00:02:14.008 CC lib/nvme/nvme_discovery.o 00:02:14.008 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:14.008 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:14.008 CC lib/nvme/nvme_tcp.o 00:02:14.008 CC lib/nvme/nvme_opal.o 00:02:14.008 CC lib/nvme/nvme_io_msg.o 00:02:14.008 CC lib/nvme/nvme_poll_group.o 00:02:14.008 CC lib/nvme/nvme_zns.o 00:02:14.008 CC lib/nvme/nvme_cuse.o 00:02:14.008 CC lib/nvme/nvme_vfio_user.o 00:02:14.008 CC lib/nvme/nvme_rdma.o 00:02:14.939 LIB libspdk_thread.a 00:02:14.939 SO libspdk_thread.so.9.0 00:02:14.939 SYMLINK libspdk_thread.so 00:02:15.196 CC lib/blob/blobstore.o 00:02:15.196 CC lib/blob/request.o 00:02:15.196 CC lib/blob/zeroes.o 00:02:15.196 CC lib/blob/blob_bs_dev.o 00:02:15.196 CC lib/accel/accel.o 00:02:15.196 CC lib/accel/accel_rpc.o 00:02:15.196 CC lib/accel/accel_sw.o 00:02:15.196 CC lib/init/subsystem_rpc.o 00:02:15.196 CC lib/init/json_config.o 00:02:15.196 CC lib/init/subsystem.o 00:02:15.196 CC lib/init/rpc.o 00:02:15.196 CC lib/virtio/virtio.o 00:02:15.196 CC lib/virtio/virtio_vhost_user.o 00:02:15.196 CC lib/virtio/virtio_vfio_user.o 00:02:15.196 CC lib/virtio/virtio_pci.o 00:02:15.196 LIB libspdk_init.a 00:02:15.196 SO libspdk_init.so.4.0 00:02:15.196 LIB libspdk_nvme.a 00:02:15.454 LIB libspdk_virtio.a 00:02:15.454 SYMLINK libspdk_init.so 00:02:15.454 SO libspdk_virtio.so.6.0 00:02:15.454 SO libspdk_nvme.so.12.0 00:02:15.454 SYMLINK libspdk_virtio.so 00:02:15.454 CC lib/event/app.o 00:02:15.454 CC lib/event/reactor.o 00:02:15.454 CC lib/event/log_rpc.o 00:02:15.454 CC lib/event/app_rpc.o 00:02:15.454 SYMLINK libspdk_nvme.so 00:02:15.454 CC lib/event/scheduler_static.o 00:02:15.712 LIB libspdk_accel.a 00:02:15.712 SO libspdk_accel.so.14.0 00:02:15.969 SYMLINK libspdk_accel.so 00:02:15.969 LIB libspdk_event.a 00:02:15.969 SO libspdk_event.so.12.0 00:02:15.969 SYMLINK libspdk_event.so 00:02:15.969 CC lib/bdev/bdev.o 00:02:15.969 CC lib/bdev/bdev_rpc.o 00:02:15.969 CC lib/bdev/bdev_zone.o 00:02:15.969 CC lib/bdev/part.o 00:02:15.969 CC lib/bdev/scsi_nvme.o 00:02:16.903 LIB libspdk_blob.a 00:02:16.903 SO libspdk_blob.so.10.1 00:02:16.903 SYMLINK libspdk_blob.so 00:02:17.161 CC lib/blobfs/blobfs.o 00:02:17.161 CC lib/blobfs/tree.o 00:02:17.161 CC lib/lvol/lvol.o 00:02:17.731 LIB libspdk_bdev.a 00:02:17.731 LIB 
libspdk_blobfs.a 00:02:17.731 SO libspdk_bdev.so.14.0 00:02:17.731 SO libspdk_blobfs.so.9.0 00:02:17.731 LIB libspdk_lvol.a 00:02:17.731 SO libspdk_lvol.so.9.1 00:02:17.731 SYMLINK libspdk_blobfs.so 00:02:17.731 SYMLINK libspdk_bdev.so 00:02:17.731 SYMLINK libspdk_lvol.so 00:02:17.988 CC lib/nvmf/ctrlr.o 00:02:17.988 CC lib/nvmf/ctrlr_discovery.o 00:02:17.989 CC lib/nvmf/ctrlr_bdev.o 00:02:17.989 CC lib/nvmf/subsystem.o 00:02:17.989 CC lib/nvmf/nvmf.o 00:02:17.989 CC lib/scsi/dev.o 00:02:17.989 CC lib/scsi/lun.o 00:02:17.989 CC lib/nvmf/nvmf_rpc.o 00:02:17.989 CC lib/nvmf/transport.o 00:02:17.989 CC lib/scsi/port.o 00:02:17.989 CC lib/nvmf/tcp.o 00:02:17.989 CC lib/scsi/scsi.o 00:02:17.989 CC lib/scsi/scsi_pr.o 00:02:17.989 CC lib/nvmf/rdma.o 00:02:17.989 CC lib/scsi/scsi_bdev.o 00:02:17.989 CC lib/scsi/scsi_rpc.o 00:02:17.989 CC lib/scsi/task.o 00:02:17.989 CC lib/nbd/nbd.o 00:02:17.989 CC lib/nbd/nbd_rpc.o 00:02:17.989 CC lib/ftl/ftl_core.o 00:02:17.989 CC lib/ftl/ftl_debug.o 00:02:17.989 CC lib/ftl/ftl_init.o 00:02:17.989 CC lib/ublk/ublk.o 00:02:17.989 CC lib/ublk/ublk_rpc.o 00:02:17.989 CC lib/ftl/ftl_layout.o 00:02:17.989 CC lib/ftl/ftl_io.o 00:02:17.989 CC lib/ftl/ftl_sb.o 00:02:17.989 CC lib/ftl/ftl_l2p.o 00:02:17.989 CC lib/ftl/ftl_l2p_flat.o 00:02:17.989 CC lib/ftl/ftl_nv_cache.o 00:02:17.989 CC lib/ftl/ftl_band.o 00:02:17.989 CC lib/ftl/ftl_band_ops.o 00:02:17.989 CC lib/ftl/ftl_writer.o 00:02:17.989 CC lib/ftl/ftl_rq.o 00:02:17.989 CC lib/ftl/ftl_reloc.o 00:02:17.989 CC lib/ftl/ftl_l2p_cache.o 00:02:17.989 CC lib/ftl/ftl_p2l.o 00:02:17.989 CC lib/ftl/mngt/ftl_mngt.o 00:02:17.989 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:17.989 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:17.989 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:17.989 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:17.989 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:17.989 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:17.989 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:17.989 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:17.989 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:17.989 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:17.989 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:17.989 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:17.989 CC lib/ftl/utils/ftl_conf.o 00:02:17.989 CC lib/ftl/utils/ftl_mempool.o 00:02:17.989 CC lib/ftl/utils/ftl_md.o 00:02:17.989 CC lib/ftl/utils/ftl_bitmap.o 00:02:17.989 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:17.989 CC lib/ftl/utils/ftl_property.o 00:02:17.989 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:17.989 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:17.989 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:17.989 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:17.989 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:17.989 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:17.989 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:17.989 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:17.989 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:17.989 CC lib/ftl/base/ftl_base_dev.o 00:02:17.989 CC lib/ftl/base/ftl_base_bdev.o 00:02:17.989 CC lib/ftl/ftl_trace.o 00:02:18.247 LIB libspdk_nbd.a 00:02:18.247 SO libspdk_nbd.so.6.0 00:02:18.506 LIB libspdk_scsi.a 00:02:18.506 SYMLINK libspdk_nbd.so 00:02:18.506 SO libspdk_scsi.so.8.0 00:02:18.506 SYMLINK libspdk_scsi.so 00:02:18.506 LIB libspdk_ublk.a 00:02:18.506 SO libspdk_ublk.so.2.0 00:02:18.506 SYMLINK libspdk_ublk.so 00:02:18.765 CC lib/iscsi/conn.o 00:02:18.765 CC lib/iscsi/init_grp.o 00:02:18.765 CC lib/iscsi/iscsi.o 00:02:18.765 CC lib/iscsi/md5.o 00:02:18.765 CC lib/iscsi/param.o 00:02:18.765 CC lib/iscsi/portal_grp.o 00:02:18.765 CC lib/iscsi/tgt_node.o 
00:02:18.765 CC lib/iscsi/iscsi_subsystem.o 00:02:18.765 CC lib/iscsi/iscsi_rpc.o 00:02:18.765 CC lib/iscsi/task.o 00:02:18.765 CC lib/vhost/vhost.o 00:02:18.765 CC lib/vhost/vhost_rpc.o 00:02:18.765 CC lib/vhost/vhost_scsi.o 00:02:18.765 CC lib/vhost/vhost_blk.o 00:02:18.765 CC lib/vhost/rte_vhost_user.o 00:02:18.765 LIB libspdk_ftl.a 00:02:18.765 SO libspdk_ftl.so.8.0 00:02:19.024 SYMLINK libspdk_ftl.so 00:02:19.282 LIB libspdk_nvmf.a 00:02:19.282 SO libspdk_nvmf.so.17.0 00:02:19.283 LIB libspdk_vhost.a 00:02:19.542 SO libspdk_vhost.so.7.1 00:02:19.542 SYMLINK libspdk_nvmf.so 00:02:19.542 SYMLINK libspdk_vhost.so 00:02:19.542 LIB libspdk_iscsi.a 00:02:19.542 SO libspdk_iscsi.so.7.0 00:02:19.801 SYMLINK libspdk_iscsi.so 00:02:20.059 CC module/env_dpdk/env_dpdk_rpc.o 00:02:20.059 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:20.059 CC module/accel/dsa/accel_dsa.o 00:02:20.059 CC module/accel/dsa/accel_dsa_rpc.o 00:02:20.059 CC module/accel/iaa/accel_iaa_rpc.o 00:02:20.059 CC module/accel/iaa/accel_iaa.o 00:02:20.059 CC module/accel/error/accel_error.o 00:02:20.059 CC module/accel/error/accel_error_rpc.o 00:02:20.059 CC module/sock/posix/posix.o 00:02:20.059 CC module/scheduler/gscheduler/gscheduler.o 00:02:20.059 CC module/accel/ioat/accel_ioat.o 00:02:20.059 CC module/accel/ioat/accel_ioat_rpc.o 00:02:20.059 CC module/blob/bdev/blob_bdev.o 00:02:20.059 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:20.059 LIB libspdk_env_dpdk_rpc.a 00:02:20.059 SO libspdk_env_dpdk_rpc.so.5.0 00:02:20.318 LIB libspdk_scheduler_gscheduler.a 00:02:20.318 SYMLINK libspdk_env_dpdk_rpc.so 00:02:20.318 LIB libspdk_scheduler_dynamic.a 00:02:20.318 LIB libspdk_accel_error.a 00:02:20.318 LIB libspdk_accel_ioat.a 00:02:20.318 SO libspdk_scheduler_gscheduler.so.3.0 00:02:20.318 LIB libspdk_scheduler_dpdk_governor.a 00:02:20.318 SO libspdk_scheduler_dynamic.so.3.0 00:02:20.318 LIB libspdk_accel_iaa.a 00:02:20.318 SO libspdk_accel_ioat.so.5.0 00:02:20.318 SO libspdk_accel_error.so.1.0 00:02:20.318 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:20.318 LIB libspdk_accel_dsa.a 00:02:20.318 SYMLINK libspdk_scheduler_dynamic.so 00:02:20.318 SO libspdk_accel_iaa.so.2.0 00:02:20.318 LIB libspdk_blob_bdev.a 00:02:20.318 SYMLINK libspdk_scheduler_gscheduler.so 00:02:20.318 SO libspdk_accel_dsa.so.4.0 00:02:20.318 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:20.318 SO libspdk_blob_bdev.so.10.1 00:02:20.318 SYMLINK libspdk_accel_ioat.so 00:02:20.318 SYMLINK libspdk_accel_error.so 00:02:20.318 SYMLINK libspdk_accel_iaa.so 00:02:20.318 SYMLINK libspdk_blob_bdev.so 00:02:20.318 SYMLINK libspdk_accel_dsa.so 00:02:20.577 LIB libspdk_sock_posix.a 00:02:20.577 SO libspdk_sock_posix.so.5.0 00:02:20.577 CC module/bdev/passthru/vbdev_passthru.o 00:02:20.577 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:20.577 CC module/bdev/lvol/vbdev_lvol.o 00:02:20.577 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:20.577 CC module/bdev/aio/bdev_aio.o 00:02:20.577 CC module/bdev/error/vbdev_error_rpc.o 00:02:20.577 CC module/bdev/error/vbdev_error.o 00:02:20.577 CC module/bdev/null/bdev_null.o 00:02:20.577 CC module/bdev/null/bdev_null_rpc.o 00:02:20.577 CC module/bdev/aio/bdev_aio_rpc.o 00:02:20.835 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:20.835 CC module/blobfs/bdev/blobfs_bdev.o 00:02:20.835 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:20.835 SYMLINK libspdk_sock_posix.so 00:02:20.835 CC module/bdev/delay/vbdev_delay.o 00:02:20.835 CC module/bdev/gpt/gpt.o 00:02:20.835 CC module/bdev/gpt/vbdev_gpt.o 00:02:20.835 CC 
module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:20.835 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:20.835 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:20.835 CC module/bdev/raid/bdev_raid_rpc.o 00:02:20.835 CC module/bdev/raid/bdev_raid.o 00:02:20.835 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:20.835 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:20.835 CC module/bdev/raid/raid0.o 00:02:20.835 CC module/bdev/ftl/bdev_ftl.o 00:02:20.835 CC module/bdev/raid/bdev_raid_sb.o 00:02:20.835 CC module/bdev/raid/raid1.o 00:02:20.835 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:20.835 CC module/bdev/raid/concat.o 00:02:20.835 CC module/bdev/nvme/bdev_nvme.o 00:02:20.835 CC module/bdev/nvme/bdev_mdns_client.o 00:02:20.835 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:20.835 CC module/bdev/nvme/nvme_rpc.o 00:02:20.835 CC module/bdev/malloc/bdev_malloc.o 00:02:20.835 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:20.835 CC module/bdev/split/vbdev_split.o 00:02:20.835 CC module/bdev/nvme/vbdev_opal.o 00:02:20.835 CC module/bdev/split/vbdev_split_rpc.o 00:02:20.835 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:20.835 CC module/bdev/iscsi/bdev_iscsi.o 00:02:20.835 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:20.835 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:20.835 LIB libspdk_blobfs_bdev.a 00:02:20.835 SO libspdk_blobfs_bdev.so.5.0 00:02:20.835 LIB libspdk_bdev_null.a 00:02:20.835 LIB libspdk_bdev_error.a 00:02:20.835 LIB libspdk_bdev_split.a 00:02:20.835 SO libspdk_bdev_null.so.5.0 00:02:20.835 LIB libspdk_bdev_gpt.a 00:02:20.835 SO libspdk_bdev_error.so.5.0 00:02:20.835 LIB libspdk_bdev_passthru.a 00:02:21.094 SO libspdk_bdev_gpt.so.5.0 00:02:21.094 SYMLINK libspdk_blobfs_bdev.so 00:02:21.094 SO libspdk_bdev_split.so.5.0 00:02:21.094 SO libspdk_bdev_passthru.so.5.0 00:02:21.094 LIB libspdk_bdev_ftl.a 00:02:21.094 SYMLINK libspdk_bdev_null.so 00:02:21.094 SYMLINK libspdk_bdev_error.so 00:02:21.094 LIB libspdk_bdev_aio.a 00:02:21.094 LIB libspdk_bdev_zone_block.a 00:02:21.094 SO libspdk_bdev_ftl.so.5.0 00:02:21.094 SYMLINK libspdk_bdev_gpt.so 00:02:21.094 SO libspdk_bdev_zone_block.so.5.0 00:02:21.094 SO libspdk_bdev_aio.so.5.0 00:02:21.094 SYMLINK libspdk_bdev_passthru.so 00:02:21.094 LIB libspdk_bdev_delay.a 00:02:21.094 SYMLINK libspdk_bdev_split.so 00:02:21.094 LIB libspdk_bdev_malloc.a 00:02:21.094 LIB libspdk_bdev_iscsi.a 00:02:21.094 SO libspdk_bdev_delay.so.5.0 00:02:21.094 SO libspdk_bdev_iscsi.so.5.0 00:02:21.094 SYMLINK libspdk_bdev_ftl.so 00:02:21.094 SYMLINK libspdk_bdev_zone_block.so 00:02:21.094 SO libspdk_bdev_malloc.so.5.0 00:02:21.094 SYMLINK libspdk_bdev_aio.so 00:02:21.094 SYMLINK libspdk_bdev_delay.so 00:02:21.094 LIB libspdk_bdev_lvol.a 00:02:21.094 SYMLINK libspdk_bdev_iscsi.so 00:02:21.094 SYMLINK libspdk_bdev_malloc.so 00:02:21.094 LIB libspdk_bdev_virtio.a 00:02:21.094 SO libspdk_bdev_lvol.so.5.0 00:02:21.094 SO libspdk_bdev_virtio.so.5.0 00:02:21.353 SYMLINK libspdk_bdev_lvol.so 00:02:21.353 SYMLINK libspdk_bdev_virtio.so 00:02:21.353 LIB libspdk_bdev_raid.a 00:02:21.353 SO libspdk_bdev_raid.so.5.0 00:02:21.353 SYMLINK libspdk_bdev_raid.so 00:02:22.291 LIB libspdk_bdev_nvme.a 00:02:22.291 SO libspdk_bdev_nvme.so.6.0 00:02:22.291 SYMLINK libspdk_bdev_nvme.so 00:02:22.550 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:22.550 CC module/event/subsystems/vmd/vmd.o 00:02:22.550 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:22.550 CC module/event/subsystems/scheduler/scheduler.o 00:02:22.550 CC module/event/subsystems/sock/sock.o 00:02:22.550 CC 
module/event/subsystems/iobuf/iobuf.o 00:02:22.550 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:22.809 LIB libspdk_event_vhost_blk.a 00:02:22.809 SO libspdk_event_vhost_blk.so.2.0 00:02:22.809 LIB libspdk_event_sock.a 00:02:22.809 LIB libspdk_event_scheduler.a 00:02:22.809 LIB libspdk_event_vmd.a 00:02:22.809 LIB libspdk_event_iobuf.a 00:02:22.809 SO libspdk_event_sock.so.4.0 00:02:22.809 SO libspdk_event_vmd.so.5.0 00:02:22.809 SYMLINK libspdk_event_vhost_blk.so 00:02:22.809 SO libspdk_event_scheduler.so.3.0 00:02:22.809 SO libspdk_event_iobuf.so.2.0 00:02:22.809 SYMLINK libspdk_event_sock.so 00:02:22.809 SYMLINK libspdk_event_vmd.so 00:02:22.809 SYMLINK libspdk_event_scheduler.so 00:02:22.809 SYMLINK libspdk_event_iobuf.so 00:02:23.068 CC module/event/subsystems/accel/accel.o 00:02:23.327 LIB libspdk_event_accel.a 00:02:23.327 SO libspdk_event_accel.so.5.0 00:02:23.327 SYMLINK libspdk_event_accel.so 00:02:23.586 CC module/event/subsystems/bdev/bdev.o 00:02:23.586 LIB libspdk_event_bdev.a 00:02:23.586 SO libspdk_event_bdev.so.5.0 00:02:23.846 SYMLINK libspdk_event_bdev.so 00:02:23.846 CC module/event/subsystems/scsi/scsi.o 00:02:23.846 CC module/event/subsystems/nbd/nbd.o 00:02:23.846 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:23.846 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:23.846 CC module/event/subsystems/ublk/ublk.o 00:02:24.105 LIB libspdk_event_nbd.a 00:02:24.105 LIB libspdk_event_scsi.a 00:02:24.105 LIB libspdk_event_ublk.a 00:02:24.105 SO libspdk_event_nbd.so.5.0 00:02:24.105 SO libspdk_event_scsi.so.5.0 00:02:24.105 SO libspdk_event_ublk.so.2.0 00:02:24.105 LIB libspdk_event_nvmf.a 00:02:24.105 SYMLINK libspdk_event_nbd.so 00:02:24.105 SYMLINK libspdk_event_scsi.so 00:02:24.105 SO libspdk_event_nvmf.so.5.0 00:02:24.105 SYMLINK libspdk_event_ublk.so 00:02:24.105 SYMLINK libspdk_event_nvmf.so 00:02:24.364 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:24.364 CC module/event/subsystems/iscsi/iscsi.o 00:02:24.364 LIB libspdk_event_vhost_scsi.a 00:02:24.364 SO libspdk_event_vhost_scsi.so.2.0 00:02:24.364 LIB libspdk_event_iscsi.a 00:02:24.623 SO libspdk_event_iscsi.so.5.0 00:02:24.623 SYMLINK libspdk_event_vhost_scsi.so 00:02:24.623 SYMLINK libspdk_event_iscsi.so 00:02:24.623 SO libspdk.so.5.0 00:02:24.623 SYMLINK libspdk.so 00:02:24.883 CC app/spdk_lspci/spdk_lspci.o 00:02:24.883 CC app/trace_record/trace_record.o 00:02:24.883 CXX app/trace/trace.o 00:02:24.883 CC app/spdk_nvme_identify/identify.o 00:02:24.883 CC app/spdk_top/spdk_top.o 00:02:24.883 TEST_HEADER include/spdk/accel_module.h 00:02:24.883 TEST_HEADER include/spdk/accel.h 00:02:24.883 TEST_HEADER include/spdk/barrier.h 00:02:24.883 TEST_HEADER include/spdk/base64.h 00:02:24.883 TEST_HEADER include/spdk/bdev.h 00:02:24.883 TEST_HEADER include/spdk/bdev_module.h 00:02:24.883 TEST_HEADER include/spdk/assert.h 00:02:24.883 TEST_HEADER include/spdk/bdev_zone.h 00:02:24.883 CC app/spdk_nvme_discover/discovery_aer.o 00:02:24.883 TEST_HEADER include/spdk/blob_bdev.h 00:02:24.883 TEST_HEADER include/spdk/bit_array.h 00:02:24.883 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:24.883 TEST_HEADER include/spdk/bit_pool.h 00:02:24.883 TEST_HEADER include/spdk/blobfs.h 00:02:24.883 CC test/rpc_client/rpc_client_test.o 00:02:24.883 CC app/spdk_nvme_perf/perf.o 00:02:24.883 TEST_HEADER include/spdk/conf.h 00:02:24.883 TEST_HEADER include/spdk/config.h 00:02:24.883 TEST_HEADER include/spdk/blob.h 00:02:24.883 TEST_HEADER include/spdk/cpuset.h 00:02:24.883 TEST_HEADER include/spdk/crc16.h 00:02:24.883 TEST_HEADER 
include/spdk/crc64.h 00:02:24.883 TEST_HEADER include/spdk/crc32.h 00:02:24.883 TEST_HEADER include/spdk/dif.h 00:02:24.883 TEST_HEADER include/spdk/endian.h 00:02:24.883 TEST_HEADER include/spdk/dma.h 00:02:24.883 TEST_HEADER include/spdk/env.h 00:02:24.883 TEST_HEADER include/spdk/env_dpdk.h 00:02:24.883 CC app/nvmf_tgt/nvmf_main.o 00:02:24.883 TEST_HEADER include/spdk/fd_group.h 00:02:24.883 TEST_HEADER include/spdk/fd.h 00:02:24.883 TEST_HEADER include/spdk/event.h 00:02:24.883 TEST_HEADER include/spdk/file.h 00:02:24.883 TEST_HEADER include/spdk/ftl.h 00:02:24.883 TEST_HEADER include/spdk/hexlify.h 00:02:24.883 TEST_HEADER include/spdk/gpt_spec.h 00:02:24.883 TEST_HEADER include/spdk/histogram_data.h 00:02:24.883 TEST_HEADER include/spdk/idxd_spec.h 00:02:24.883 TEST_HEADER include/spdk/idxd.h 00:02:24.883 TEST_HEADER include/spdk/init.h 00:02:24.883 CC app/spdk_dd/spdk_dd.o 00:02:24.883 TEST_HEADER include/spdk/ioat.h 00:02:24.883 TEST_HEADER include/spdk/ioat_spec.h 00:02:24.883 TEST_HEADER include/spdk/json.h 00:02:24.883 TEST_HEADER include/spdk/iscsi_spec.h 00:02:24.883 TEST_HEADER include/spdk/jsonrpc.h 00:02:24.883 TEST_HEADER include/spdk/likely.h 00:02:24.883 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:24.883 CC app/vhost/vhost.o 00:02:24.883 TEST_HEADER include/spdk/log.h 00:02:24.883 TEST_HEADER include/spdk/lvol.h 00:02:24.883 TEST_HEADER include/spdk/memory.h 00:02:24.883 CC app/iscsi_tgt/iscsi_tgt.o 00:02:24.883 TEST_HEADER include/spdk/nbd.h 00:02:24.883 TEST_HEADER include/spdk/mmio.h 00:02:24.883 CC app/spdk_tgt/spdk_tgt.o 00:02:24.883 TEST_HEADER include/spdk/nvme.h 00:02:24.883 TEST_HEADER include/spdk/nvme_intel.h 00:02:24.883 TEST_HEADER include/spdk/notify.h 00:02:24.883 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:24.883 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:24.883 TEST_HEADER include/spdk/nvme_zns.h 00:02:24.883 TEST_HEADER include/spdk/nvme_spec.h 00:02:24.883 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:24.883 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:24.883 TEST_HEADER include/spdk/nvmf.h 00:02:24.883 TEST_HEADER include/spdk/nvmf_spec.h 00:02:24.883 TEST_HEADER include/spdk/nvmf_transport.h 00:02:24.883 TEST_HEADER include/spdk/opal.h 00:02:24.883 TEST_HEADER include/spdk/pci_ids.h 00:02:24.883 TEST_HEADER include/spdk/opal_spec.h 00:02:24.883 TEST_HEADER include/spdk/pipe.h 00:02:24.883 TEST_HEADER include/spdk/queue.h 00:02:24.883 TEST_HEADER include/spdk/rpc.h 00:02:24.883 TEST_HEADER include/spdk/reduce.h 00:02:24.883 TEST_HEADER include/spdk/scheduler.h 00:02:24.883 TEST_HEADER include/spdk/scsi.h 00:02:24.883 TEST_HEADER include/spdk/scsi_spec.h 00:02:24.883 TEST_HEADER include/spdk/stdinc.h 00:02:24.883 TEST_HEADER include/spdk/sock.h 00:02:24.883 TEST_HEADER include/spdk/string.h 00:02:24.883 TEST_HEADER include/spdk/trace.h 00:02:24.883 TEST_HEADER include/spdk/thread.h 00:02:24.883 TEST_HEADER include/spdk/trace_parser.h 00:02:24.883 TEST_HEADER include/spdk/tree.h 00:02:24.883 TEST_HEADER include/spdk/util.h 00:02:24.883 TEST_HEADER include/spdk/ublk.h 00:02:24.883 TEST_HEADER include/spdk/version.h 00:02:24.883 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:24.883 TEST_HEADER include/spdk/uuid.h 00:02:24.883 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:24.883 TEST_HEADER include/spdk/vhost.h 00:02:24.883 TEST_HEADER include/spdk/vmd.h 00:02:24.883 TEST_HEADER include/spdk/xor.h 00:02:24.883 TEST_HEADER include/spdk/zipf.h 00:02:25.152 CXX test/cpp_headers/accel_module.o 00:02:25.152 CXX test/cpp_headers/accel.o 00:02:25.152 CXX 
test/cpp_headers/assert.o 00:02:25.152 CXX test/cpp_headers/barrier.o 00:02:25.152 CXX test/cpp_headers/base64.o 00:02:25.152 CXX test/cpp_headers/bdev_zone.o 00:02:25.152 CXX test/cpp_headers/bdev_module.o 00:02:25.152 CXX test/cpp_headers/bdev.o 00:02:25.152 CXX test/cpp_headers/bit_pool.o 00:02:25.152 CXX test/cpp_headers/bit_array.o 00:02:25.152 CXX test/cpp_headers/blob_bdev.o 00:02:25.152 CXX test/cpp_headers/blobfs_bdev.o 00:02:25.152 CXX test/cpp_headers/blobfs.o 00:02:25.152 CXX test/cpp_headers/blob.o 00:02:25.152 CC examples/vmd/led/led.o 00:02:25.152 CXX test/cpp_headers/conf.o 00:02:25.152 CXX test/cpp_headers/config.o 00:02:25.152 CXX test/cpp_headers/crc16.o 00:02:25.152 CC examples/ioat/verify/verify.o 00:02:25.152 CXX test/cpp_headers/crc32.o 00:02:25.152 CC examples/idxd/perf/perf.o 00:02:25.152 CXX test/cpp_headers/crc64.o 00:02:25.152 CXX test/cpp_headers/cpuset.o 00:02:25.152 CXX test/cpp_headers/dif.o 00:02:25.152 CXX test/cpp_headers/dma.o 00:02:25.152 CXX test/cpp_headers/endian.o 00:02:25.152 CXX test/cpp_headers/env_dpdk.o 00:02:25.152 CC examples/vmd/lsvmd/lsvmd.o 00:02:25.152 CXX test/cpp_headers/env.o 00:02:25.152 CXX test/cpp_headers/event.o 00:02:25.152 CXX test/cpp_headers/fd_group.o 00:02:25.152 CXX test/cpp_headers/fd.o 00:02:25.152 CXX test/cpp_headers/ftl.o 00:02:25.152 CXX test/cpp_headers/file.o 00:02:25.152 CXX test/cpp_headers/gpt_spec.o 00:02:25.152 CXX test/cpp_headers/hexlify.o 00:02:25.152 CXX test/cpp_headers/histogram_data.o 00:02:25.152 CXX test/cpp_headers/idxd.o 00:02:25.152 CXX test/cpp_headers/idxd_spec.o 00:02:25.153 CXX test/cpp_headers/init.o 00:02:25.153 CXX test/cpp_headers/ioat.o 00:02:25.153 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:25.153 CC examples/ioat/perf/perf.o 00:02:25.153 CC test/thread/poller_perf/poller_perf.o 00:02:25.153 CC examples/nvme/abort/abort.o 00:02:25.153 CC examples/nvme/reconnect/reconnect.o 00:02:25.153 CC test/nvme/reset/reset.o 00:02:25.153 CC test/nvme/connect_stress/connect_stress.o 00:02:25.153 CC examples/accel/perf/accel_perf.o 00:02:25.153 CC test/env/pci/pci_ut.o 00:02:25.153 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:25.153 CC test/nvme/compliance/nvme_compliance.o 00:02:25.153 CC examples/nvmf/nvmf/nvmf.o 00:02:25.153 CC test/app/histogram_perf/histogram_perf.o 00:02:25.153 CC test/nvme/e2edp/nvme_dp.o 00:02:25.153 CC examples/nvme/arbitration/arbitration.o 00:02:25.153 CC test/app/stub/stub.o 00:02:25.153 CC examples/nvme/hotplug/hotplug.o 00:02:25.153 CC test/env/memory/memory_ut.o 00:02:25.153 CC test/nvme/overhead/overhead.o 00:02:25.153 CC test/nvme/boot_partition/boot_partition.o 00:02:25.153 CC test/event/reactor/reactor.o 00:02:25.153 CC test/nvme/simple_copy/simple_copy.o 00:02:25.153 CC examples/nvme/hello_world/hello_world.o 00:02:25.153 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:25.153 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:25.153 CC test/bdev/bdevio/bdevio.o 00:02:25.153 CC test/nvme/aer/aer.o 00:02:25.153 CC test/nvme/startup/startup.o 00:02:25.153 CC examples/blob/hello_world/hello_blob.o 00:02:25.153 CC test/env/vtophys/vtophys.o 00:02:25.153 CC examples/bdev/bdevperf/bdevperf.o 00:02:25.153 CC test/nvme/reserve/reserve.o 00:02:25.153 CC examples/thread/thread/thread_ex.o 00:02:25.153 CC examples/bdev/hello_world/hello_bdev.o 00:02:25.153 CC examples/sock/hello_world/hello_sock.o 00:02:25.153 CXX test/cpp_headers/ioat_spec.o 00:02:25.153 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:25.153 CC examples/blob/cli/blobcli.o 00:02:25.153 CC 
test/event/app_repeat/app_repeat.o 00:02:25.153 CC app/fio/nvme/fio_plugin.o 00:02:25.153 CC test/blobfs/mkfs/mkfs.o 00:02:25.153 CC test/event/reactor_perf/reactor_perf.o 00:02:25.153 CC test/event/event_perf/event_perf.o 00:02:25.153 CC test/nvme/fused_ordering/fused_ordering.o 00:02:25.153 CC examples/util/zipf/zipf.o 00:02:25.153 CC test/nvme/fdp/fdp.o 00:02:25.153 CC test/app/jsoncat/jsoncat.o 00:02:25.153 CC test/nvme/cuse/cuse.o 00:02:25.153 CC test/nvme/sgl/sgl.o 00:02:25.153 CC app/fio/bdev/fio_plugin.o 00:02:25.153 CC test/nvme/err_injection/err_injection.o 00:02:25.153 CC test/app/bdev_svc/bdev_svc.o 00:02:25.153 CC test/event/scheduler/scheduler.o 00:02:25.153 CC test/dma/test_dma/test_dma.o 00:02:25.153 CC test/accel/dif/dif.o 00:02:25.414 CC test/lvol/esnap/esnap.o 00:02:25.414 CC test/env/mem_callbacks/mem_callbacks.o 00:02:25.414 LINK spdk_lspci 00:02:25.414 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:25.414 LINK interrupt_tgt 00:02:25.414 LINK spdk_tgt 00:02:25.414 LINK rpc_client_test 00:02:25.414 LINK nvmf_tgt 00:02:25.414 LINK spdk_nvme_discover 00:02:25.414 LINK vhost 00:02:25.414 LINK poller_perf 00:02:25.414 LINK histogram_perf 00:02:25.414 LINK event_perf 00:02:25.414 LINK spdk_trace_record 00:02:25.414 LINK env_dpdk_post_init 00:02:25.414 LINK reactor_perf 00:02:25.677 LINK lsvmd 00:02:25.677 LINK iscsi_tgt 00:02:25.677 LINK startup 00:02:25.677 LINK zipf 00:02:25.677 LINK connect_stress 00:02:25.677 LINK led 00:02:25.677 LINK pmr_persistence 00:02:25.677 LINK reactor 00:02:25.677 LINK doorbell_aers 00:02:25.677 LINK bdev_svc 00:02:25.677 CXX test/cpp_headers/iscsi_spec.o 00:02:25.677 LINK cmb_copy 00:02:25.677 LINK jsoncat 00:02:25.677 CXX test/cpp_headers/json.o 00:02:25.677 LINK ioat_perf 00:02:25.677 CXX test/cpp_headers/jsonrpc.o 00:02:25.677 CXX test/cpp_headers/likely.o 00:02:25.677 CXX test/cpp_headers/log.o 00:02:25.677 CXX test/cpp_headers/lvol.o 00:02:25.677 LINK vtophys 00:02:25.677 LINK boot_partition 00:02:25.677 LINK hello_blob 00:02:25.677 CXX test/cpp_headers/memory.o 00:02:25.677 LINK app_repeat 00:02:25.677 CXX test/cpp_headers/mmio.o 00:02:25.677 CXX test/cpp_headers/nbd.o 00:02:25.677 LINK hotplug 00:02:25.677 CXX test/cpp_headers/notify.o 00:02:25.677 CXX test/cpp_headers/nvme.o 00:02:25.677 CXX test/cpp_headers/nvme_intel.o 00:02:25.677 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:25.677 CXX test/cpp_headers/nvme_ocssd.o 00:02:25.677 LINK hello_sock 00:02:25.677 LINK stub 00:02:25.677 CXX test/cpp_headers/nvme_spec.o 00:02:25.677 CXX test/cpp_headers/nvme_zns.o 00:02:25.677 LINK mkfs 00:02:25.677 LINK hello_bdev 00:02:25.677 CXX test/cpp_headers/nvmf_cmd.o 00:02:25.677 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:25.677 LINK spdk_dd 00:02:25.677 CXX test/cpp_headers/nvmf.o 00:02:25.677 LINK verify 00:02:25.677 CXX test/cpp_headers/nvmf_spec.o 00:02:25.677 LINK fused_ordering 00:02:25.678 CXX test/cpp_headers/nvmf_transport.o 00:02:25.678 LINK reserve 00:02:25.678 LINK hello_world 00:02:25.678 LINK err_injection 00:02:25.678 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:25.678 CXX test/cpp_headers/opal.o 00:02:25.678 CXX test/cpp_headers/opal_spec.o 00:02:25.678 CXX test/cpp_headers/pci_ids.o 00:02:25.678 LINK simple_copy 00:02:25.678 CXX test/cpp_headers/pipe.o 00:02:25.678 CXX test/cpp_headers/queue.o 00:02:25.678 CXX test/cpp_headers/reduce.o 00:02:25.678 CXX test/cpp_headers/rpc.o 00:02:25.678 CXX test/cpp_headers/scheduler.o 00:02:25.678 LINK nvme_dp 00:02:25.678 LINK aer 00:02:25.678 CXX test/cpp_headers/scsi.o 00:02:25.678 CXX 
test/cpp_headers/scsi_spec.o 00:02:25.678 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:25.678 CXX test/cpp_headers/sock.o 00:02:25.678 LINK nvmf 00:02:25.678 CXX test/cpp_headers/stdinc.o 00:02:25.678 CXX test/cpp_headers/string.o 00:02:25.678 CXX test/cpp_headers/thread.o 00:02:25.678 LINK scheduler 00:02:25.678 LINK thread 00:02:25.678 LINK reset 00:02:25.678 LINK idxd_perf 00:02:25.678 CXX test/cpp_headers/trace.o 00:02:25.678 CXX test/cpp_headers/trace_parser.o 00:02:25.942 LINK arbitration 00:02:25.942 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:25.942 LINK sgl 00:02:25.942 CXX test/cpp_headers/tree.o 00:02:25.942 LINK reconnect 00:02:25.942 LINK abort 00:02:25.942 LINK overhead 00:02:25.942 LINK spdk_trace 00:02:25.942 CXX test/cpp_headers/ublk.o 00:02:25.942 CXX test/cpp_headers/util.o 00:02:25.942 CXX test/cpp_headers/uuid.o 00:02:25.942 CXX test/cpp_headers/vfio_user_spec.o 00:02:25.942 CXX test/cpp_headers/vfio_user_pci.o 00:02:25.942 CXX test/cpp_headers/version.o 00:02:25.942 LINK nvme_compliance 00:02:25.942 CXX test/cpp_headers/vhost.o 00:02:25.942 CXX test/cpp_headers/vmd.o 00:02:25.942 LINK fdp 00:02:25.942 CXX test/cpp_headers/xor.o 00:02:25.942 CXX test/cpp_headers/zipf.o 00:02:25.942 LINK accel_perf 00:02:25.942 LINK bdevio 00:02:25.942 LINK nvme_manage 00:02:25.942 LINK dif 00:02:25.942 LINK test_dma 00:02:25.942 LINK blobcli 00:02:25.942 LINK pci_ut 00:02:25.942 LINK nvme_fuzz 00:02:26.200 LINK spdk_bdev 00:02:26.200 LINK spdk_nvme 00:02:26.200 LINK spdk_top 00:02:26.200 LINK mem_callbacks 00:02:26.200 LINK spdk_nvme_perf 00:02:26.200 LINK spdk_nvme_identify 00:02:26.459 LINK vhost_fuzz 00:02:26.459 LINK bdevperf 00:02:26.459 LINK memory_ut 00:02:26.459 LINK cuse 00:02:27.026 LINK iscsi_fuzz 00:02:28.403 LINK esnap 00:02:28.661 00:02:28.661 real 0m38.545s 00:02:28.661 user 5m41.848s 00:02:28.661 sys 3m14.975s 00:02:28.661 10:56:49 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:28.661 10:56:49 -- common/autotest_common.sh@10 -- $ set +x 00:02:28.661 ************************************ 00:02:28.661 END TEST make 00:02:28.661 ************************************ 00:02:28.920 10:56:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:02:28.920 10:56:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:02:28.920 10:56:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:02:28.920 10:56:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:02:28.920 10:56:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:02:28.920 10:56:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:02:28.920 10:56:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:02:28.920 10:56:49 -- scripts/common.sh@335 -- # IFS=.-: 00:02:28.920 10:56:49 -- scripts/common.sh@335 -- # read -ra ver1 00:02:28.920 10:56:49 -- scripts/common.sh@336 -- # IFS=.-: 00:02:28.920 10:56:49 -- scripts/common.sh@336 -- # read -ra ver2 00:02:28.920 10:56:49 -- scripts/common.sh@337 -- # local 'op=<' 00:02:28.920 10:56:49 -- scripts/common.sh@339 -- # ver1_l=2 00:02:28.920 10:56:49 -- scripts/common.sh@340 -- # ver2_l=1 00:02:28.920 10:56:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:02:28.920 10:56:49 -- scripts/common.sh@343 -- # case "$op" in 00:02:28.920 10:56:49 -- scripts/common.sh@344 -- # : 1 00:02:28.920 10:56:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:02:28.920 10:56:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:28.920 10:56:49 -- scripts/common.sh@364 -- # decimal 1 00:02:28.920 10:56:49 -- scripts/common.sh@352 -- # local d=1 00:02:28.920 10:56:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:28.920 10:56:49 -- scripts/common.sh@354 -- # echo 1 00:02:28.920 10:56:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:02:28.920 10:56:49 -- scripts/common.sh@365 -- # decimal 2 00:02:28.920 10:56:49 -- scripts/common.sh@352 -- # local d=2 00:02:28.920 10:56:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:28.920 10:56:49 -- scripts/common.sh@354 -- # echo 2 00:02:28.920 10:56:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:02:28.920 10:56:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:02:28.920 10:56:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:02:28.920 10:56:49 -- scripts/common.sh@367 -- # return 0 00:02:28.920 10:56:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:28.920 10:56:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:02:28.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:28.920 --rc genhtml_branch_coverage=1 00:02:28.920 --rc genhtml_function_coverage=1 00:02:28.920 --rc genhtml_legend=1 00:02:28.920 --rc geninfo_all_blocks=1 00:02:28.920 --rc geninfo_unexecuted_blocks=1 00:02:28.920 00:02:28.920 ' 00:02:28.920 10:56:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:02:28.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:28.920 --rc genhtml_branch_coverage=1 00:02:28.920 --rc genhtml_function_coverage=1 00:02:28.920 --rc genhtml_legend=1 00:02:28.920 --rc geninfo_all_blocks=1 00:02:28.920 --rc geninfo_unexecuted_blocks=1 00:02:28.920 00:02:28.920 ' 00:02:28.920 10:56:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:02:28.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:28.920 --rc genhtml_branch_coverage=1 00:02:28.920 --rc genhtml_function_coverage=1 00:02:28.920 --rc genhtml_legend=1 00:02:28.920 --rc geninfo_all_blocks=1 00:02:28.920 --rc geninfo_unexecuted_blocks=1 00:02:28.920 00:02:28.920 ' 00:02:28.920 10:56:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:02:28.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:28.920 --rc genhtml_branch_coverage=1 00:02:28.920 --rc genhtml_function_coverage=1 00:02:28.920 --rc genhtml_legend=1 00:02:28.920 --rc geninfo_all_blocks=1 00:02:28.920 --rc geninfo_unexecuted_blocks=1 00:02:28.920 00:02:28.920 ' 00:02:28.920 10:56:49 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:02:28.920 10:56:49 -- nvmf/common.sh@7 -- # uname -s 00:02:28.920 10:56:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:28.920 10:56:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:28.920 10:56:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:28.920 10:56:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:28.921 10:56:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:28.921 10:56:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:28.921 10:56:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:28.921 10:56:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:28.921 10:56:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:28.921 10:56:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:28.921 10:56:49 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:02:28.921 10:56:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:02:28.921 10:56:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:28.921 10:56:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:28.921 10:56:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:28.921 10:56:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:02:28.921 10:56:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:28.921 10:56:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:28.921 10:56:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:28.921 10:56:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:28.921 10:56:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:28.921 10:56:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:28.921 10:56:49 -- paths/export.sh@5 -- # export PATH 00:02:28.921 10:56:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:28.921 10:56:49 -- nvmf/common.sh@46 -- # : 0 00:02:28.921 10:56:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:28.921 10:56:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:28.921 10:56:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:28.921 10:56:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:28.921 10:56:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:28.921 10:56:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:28.921 10:56:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:28.921 10:56:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:28.921 10:56:49 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:28.921 10:56:49 -- spdk/autotest.sh@32 -- # uname -s 00:02:28.921 10:56:49 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:28.921 10:56:49 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:28.921 10:56:49 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:28.921 10:56:49 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:28.921 10:56:49 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:28.921 10:56:49 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:28.921 10:56:49 -- 
spdk/autotest.sh@46 -- # type -P udevadm 00:02:28.921 10:56:49 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:28.921 10:56:49 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:28.921 10:56:49 -- spdk/autotest.sh@48 -- # udevadm_pid=1382149 00:02:28.921 10:56:49 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:02:28.921 10:56:49 -- spdk/autotest.sh@54 -- # echo 1382151 00:02:28.921 10:56:49 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:02:28.921 10:56:49 -- spdk/autotest.sh@56 -- # echo 1382152 00:02:28.921 10:56:49 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:02:28.921 10:56:49 -- spdk/autotest.sh@58 -- # [[ ............................... != QEMU ]] 00:02:28.921 10:56:49 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l 00:02:28.921 10:56:49 -- spdk/autotest.sh@60 -- # echo 1382153 00:02:28.921 10:56:49 -- spdk/autotest.sh@62 -- # echo 1382154 00:02:28.921 10:56:49 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:28.921 10:56:49 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l 00:02:28.921 10:56:49 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:28.921 10:56:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:28.921 10:56:49 -- common/autotest_common.sh@10 -- # set +x 00:02:28.921 10:56:49 -- spdk/autotest.sh@70 -- # create_test_list 00:02:28.921 10:56:49 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:28.921 10:56:49 -- common/autotest_common.sh@10 -- # set +x 00:02:28.921 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:28.921 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:02:28.921 10:56:49 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:02:28.921 10:56:49 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:28.921 10:56:49 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:28.921 10:56:49 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:02:28.921 10:56:49 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:28.921 10:56:49 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:28.921 10:56:49 -- common/autotest_common.sh@1450 -- # uname 00:02:28.921 10:56:49 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:02:28.921 10:56:49 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:28.921 10:56:49 -- common/autotest_common.sh@1470 -- # uname 00:02:28.921 10:56:49 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:02:28.921 10:56:49 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:02:28.921 10:56:49 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 
--rc geninfo_unexecuted_blocks=1 --version 00:02:29.180 lcov: LCOV version 1.15 00:02:29.180 10:56:49 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:02:31.713 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:31.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:31.713 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:31.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:31.713 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:31.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:49.806 10:57:09 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:02:49.806 10:57:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:49.806 10:57:09 -- common/autotest_common.sh@10 -- # set +x 00:02:49.806 10:57:09 -- spdk/autotest.sh@89 -- # rm -f 00:02:49.806 10:57:09 -- spdk/autotest.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:02:52.341 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:52.341 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:52.341 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:52.341 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:52.341 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:52.341 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:52.341 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:52.341 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:52.341 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:52.341 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:52.341 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:52.341 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:52.341 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:52.341 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:52.341 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:52.341 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:52.341 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:02:53.718 10:57:14 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:02:53.718 10:57:14 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:02:53.718 10:57:14 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:02:53.718 10:57:14 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:02:53.718 10:57:14 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:02:53.718 10:57:14 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:02:53.718 10:57:14 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:02:53.718 10:57:14 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned 
]] 00:02:53.718 10:57:14 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:02:53.718 10:57:14 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:02:53.718 10:57:14 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 00:02:53.718 10:57:14 -- spdk/autotest.sh@108 -- # grep -v p 00:02:53.718 10:57:14 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:53.718 10:57:14 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:02:53.718 10:57:14 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:02:53.718 10:57:14 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:02:53.718 10:57:14 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:53.718 No valid GPT data, bailing 00:02:53.718 10:57:14 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:53.718 10:57:14 -- scripts/common.sh@393 -- # pt= 00:02:53.718 10:57:14 -- scripts/common.sh@394 -- # return 1 00:02:53.718 10:57:14 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:53.718 1+0 records in 00:02:53.718 1+0 records out 00:02:53.718 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00610073 s, 172 MB/s 00:02:53.718 10:57:14 -- spdk/autotest.sh@116 -- # sync 00:02:53.718 10:57:14 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:53.718 10:57:14 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:53.718 10:57:14 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:58.993 10:57:19 -- spdk/autotest.sh@122 -- # uname -s 00:02:58.993 10:57:19 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:02:58.993 10:57:19 -- spdk/autotest.sh@123 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:02:58.993 10:57:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:58.993 10:57:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:58.993 10:57:19 -- common/autotest_common.sh@10 -- # set +x 00:02:58.993 ************************************ 00:02:58.993 START TEST setup.sh 00:02:58.993 ************************************ 00:02:58.993 10:57:19 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:02:58.993 * Looking for test storage... 
00:02:58.993 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:02:58.993 10:57:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:02:58.993 10:57:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:02:58.993 10:57:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:02:58.993 10:57:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:02:58.993 10:57:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:02:58.993 10:57:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:02:58.993 10:57:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:02:58.993 10:57:19 -- scripts/common.sh@335 -- # IFS=.-: 00:02:58.993 10:57:19 -- scripts/common.sh@335 -- # read -ra ver1 00:02:58.993 10:57:19 -- scripts/common.sh@336 -- # IFS=.-: 00:02:58.993 10:57:19 -- scripts/common.sh@336 -- # read -ra ver2 00:02:58.993 10:57:19 -- scripts/common.sh@337 -- # local 'op=<' 00:02:58.993 10:57:19 -- scripts/common.sh@339 -- # ver1_l=2 00:02:58.993 10:57:19 -- scripts/common.sh@340 -- # ver2_l=1 00:02:58.993 10:57:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:02:58.993 10:57:19 -- scripts/common.sh@343 -- # case "$op" in 00:02:58.993 10:57:19 -- scripts/common.sh@344 -- # : 1 00:02:58.993 10:57:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:02:58.993 10:57:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:58.993 10:57:19 -- scripts/common.sh@364 -- # decimal 1 00:02:58.993 10:57:19 -- scripts/common.sh@352 -- # local d=1 00:02:58.993 10:57:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:58.993 10:57:19 -- scripts/common.sh@354 -- # echo 1 00:02:58.993 10:57:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:02:58.993 10:57:19 -- scripts/common.sh@365 -- # decimal 2 00:02:58.993 10:57:19 -- scripts/common.sh@352 -- # local d=2 00:02:58.993 10:57:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:58.993 10:57:19 -- scripts/common.sh@354 -- # echo 2 00:02:58.993 10:57:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:02:58.993 10:57:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:02:58.993 10:57:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:02:58.993 10:57:19 -- scripts/common.sh@367 -- # return 0 00:02:58.993 10:57:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:58.993 10:57:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:02:58.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:58.993 --rc genhtml_branch_coverage=1 00:02:58.993 --rc genhtml_function_coverage=1 00:02:58.993 --rc genhtml_legend=1 00:02:58.993 --rc geninfo_all_blocks=1 00:02:58.993 --rc geninfo_unexecuted_blocks=1 00:02:58.993 00:02:58.993 ' 00:02:58.993 10:57:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:02:58.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:58.993 --rc genhtml_branch_coverage=1 00:02:58.993 --rc genhtml_function_coverage=1 00:02:58.993 --rc genhtml_legend=1 00:02:58.993 --rc geninfo_all_blocks=1 00:02:58.993 --rc geninfo_unexecuted_blocks=1 00:02:58.993 00:02:58.993 ' 00:02:58.993 10:57:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:02:58.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:58.993 --rc genhtml_branch_coverage=1 00:02:58.993 --rc genhtml_function_coverage=1 00:02:58.993 --rc genhtml_legend=1 00:02:58.993 --rc geninfo_all_blocks=1 00:02:58.993 --rc geninfo_unexecuted_blocks=1 00:02:58.993 00:02:58.993 ' 
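The trace above is the per-suite preamble: the test script locates its test storage, then checks the installed lcov (LCOV version 1.15 per the log) against 2 with an element-wise comparison and, since 1.15 < 2 holds, keeps the branch/function coverage flags in LCOV_OPTS and LCOV. The snippet below is a minimal, self-contained sketch of that kind of element-wise version comparison; it is not a copy of scripts/common.sh, and the helper name version_lt is hypothetical.

#!/usr/bin/env bash
# Illustrative sketch of an element-wise version compare like the one traced
# above (cmp_versions 1.15 '<' 2). Handles numeric components only.
version_lt() {
    local IFS=.-:            # split versions on '.', '-' and ':'
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} )) i
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a > b )) && return 1    # left side newer  -> not less-than
        (( a < b )) && return 0    # left side older  -> less-than
    done
    return 1                       # equal            -> not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"   # matches the return 0 seen in the trace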
00:02:58.993 10:57:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:02:58.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:58.993 --rc genhtml_branch_coverage=1 00:02:58.994 --rc genhtml_function_coverage=1 00:02:58.994 --rc genhtml_legend=1 00:02:58.994 --rc geninfo_all_blocks=1 00:02:58.994 --rc geninfo_unexecuted_blocks=1 00:02:58.994 00:02:58.994 ' 00:02:58.994 10:57:19 -- setup/test-setup.sh@10 -- # uname -s 00:02:58.994 10:57:19 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:58.994 10:57:19 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:02:58.994 10:57:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:58.994 10:57:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:58.994 10:57:19 -- common/autotest_common.sh@10 -- # set +x 00:02:58.994 ************************************ 00:02:58.994 START TEST acl 00:02:58.994 ************************************ 00:02:58.994 10:57:19 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:02:59.253 * Looking for test storage... 00:02:59.253 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:02:59.253 10:57:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:02:59.253 10:57:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:02:59.253 10:57:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:02:59.253 10:57:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:02:59.253 10:57:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:02:59.253 10:57:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:02:59.253 10:57:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:02:59.253 10:57:19 -- scripts/common.sh@335 -- # IFS=.-: 00:02:59.253 10:57:19 -- scripts/common.sh@335 -- # read -ra ver1 00:02:59.253 10:57:19 -- scripts/common.sh@336 -- # IFS=.-: 00:02:59.253 10:57:19 -- scripts/common.sh@336 -- # read -ra ver2 00:02:59.253 10:57:19 -- scripts/common.sh@337 -- # local 'op=<' 00:02:59.253 10:57:19 -- scripts/common.sh@339 -- # ver1_l=2 00:02:59.253 10:57:19 -- scripts/common.sh@340 -- # ver2_l=1 00:02:59.253 10:57:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:02:59.253 10:57:19 -- scripts/common.sh@343 -- # case "$op" in 00:02:59.253 10:57:19 -- scripts/common.sh@344 -- # : 1 00:02:59.253 10:57:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:02:59.253 10:57:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:59.253 10:57:19 -- scripts/common.sh@364 -- # decimal 1 00:02:59.253 10:57:19 -- scripts/common.sh@352 -- # local d=1 00:02:59.253 10:57:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:59.253 10:57:19 -- scripts/common.sh@354 -- # echo 1 00:02:59.253 10:57:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:02:59.253 10:57:19 -- scripts/common.sh@365 -- # decimal 2 00:02:59.253 10:57:19 -- scripts/common.sh@352 -- # local d=2 00:02:59.253 10:57:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:59.253 10:57:19 -- scripts/common.sh@354 -- # echo 2 00:02:59.253 10:57:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:02:59.253 10:57:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:02:59.253 10:57:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:02:59.253 10:57:19 -- scripts/common.sh@367 -- # return 0 00:02:59.253 10:57:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:59.253 10:57:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:02:59.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:59.253 --rc genhtml_branch_coverage=1 00:02:59.253 --rc genhtml_function_coverage=1 00:02:59.253 --rc genhtml_legend=1 00:02:59.253 --rc geninfo_all_blocks=1 00:02:59.253 --rc geninfo_unexecuted_blocks=1 00:02:59.253 00:02:59.253 ' 00:02:59.253 10:57:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:02:59.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:59.253 --rc genhtml_branch_coverage=1 00:02:59.253 --rc genhtml_function_coverage=1 00:02:59.253 --rc genhtml_legend=1 00:02:59.253 --rc geninfo_all_blocks=1 00:02:59.253 --rc geninfo_unexecuted_blocks=1 00:02:59.253 00:02:59.253 ' 00:02:59.253 10:57:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:02:59.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:59.253 --rc genhtml_branch_coverage=1 00:02:59.253 --rc genhtml_function_coverage=1 00:02:59.253 --rc genhtml_legend=1 00:02:59.253 --rc geninfo_all_blocks=1 00:02:59.253 --rc geninfo_unexecuted_blocks=1 00:02:59.253 00:02:59.253 ' 00:02:59.253 10:57:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:02:59.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:59.253 --rc genhtml_branch_coverage=1 00:02:59.253 --rc genhtml_function_coverage=1 00:02:59.253 --rc genhtml_legend=1 00:02:59.253 --rc geninfo_all_blocks=1 00:02:59.253 --rc geninfo_unexecuted_blocks=1 00:02:59.253 00:02:59.253 ' 00:02:59.253 10:57:19 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:59.253 10:57:19 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:02:59.253 10:57:19 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:02:59.253 10:57:19 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:02:59.253 10:57:19 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:02:59.253 10:57:19 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:02:59.253 10:57:19 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:02:59.253 10:57:19 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:59.253 10:57:19 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:02:59.253 10:57:19 -- setup/acl.sh@12 -- # devs=() 00:02:59.253 10:57:19 -- setup/acl.sh@12 -- # declare -a devs 00:02:59.253 10:57:19 -- setup/acl.sh@13 -- # drivers=() 00:02:59.253 10:57:19 -- setup/acl.sh@13 -- # declare -A drivers 00:02:59.253 10:57:19 -- setup/acl.sh@51 -- # 
setup reset 00:02:59.253 10:57:19 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:59.253 10:57:19 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:04.522 10:57:24 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:04.522 10:57:24 -- setup/acl.sh@16 -- # local dev driver 00:03:04.522 10:57:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.522 10:57:24 -- setup/acl.sh@15 -- # setup output status 00:03:04.522 10:57:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:04.522 10:57:24 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:03:06.426 Hugepages 00:03:06.426 node hugesize free / total 00:03:06.426 10:57:26 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:06.426 10:57:26 -- setup/acl.sh@19 -- # continue 00:03:06.426 10:57:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.426 10:57:26 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:06.426 10:57:26 -- setup/acl.sh@19 -- # continue 00:03:06.426 10:57:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.426 10:57:26 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:06.426 10:57:26 -- setup/acl.sh@19 -- # continue 00:03:06.426 10:57:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.426 00:03:06.426 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:06.426 10:57:26 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:06.426 10:57:26 -- setup/acl.sh@19 -- # continue 00:03:06.426 10:57:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.426 10:57:26 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:06.426 10:57:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.426 10:57:26 -- setup/acl.sh@20 -- # continue 00:03:06.426 10:57:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.426 10:57:26 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:06.426 10:57:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.426 10:57:26 -- setup/acl.sh@20 -- # continue 00:03:06.426 10:57:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.426 10:57:26 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:06.426 10:57:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.426 10:57:26 -- setup/acl.sh@20 -- # continue 00:03:06.426 10:57:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.426 10:57:26 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:06.426 10:57:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.426 10:57:26 -- setup/acl.sh@20 -- # continue 00:03:06.426 10:57:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.426 10:57:26 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:06.426 10:57:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.426 10:57:26 -- setup/acl.sh@20 -- # continue 00:03:06.426 10:57:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.426 10:57:26 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:06.426 10:57:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.426 10:57:26 -- setup/acl.sh@20 -- # continue 00:03:06.426 10:57:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.426 10:57:26 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:06.426 10:57:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.426 10:57:26 -- setup/acl.sh@20 -- # continue 00:03:06.426 10:57:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.426 10:57:26 -- 
setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:06.426 10:57:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.426 10:57:26 -- setup/acl.sh@20 -- # continue 00:03:06.426 10:57:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.426 10:57:26 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:06.426 10:57:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.426 10:57:26 -- setup/acl.sh@20 -- # continue 00:03:06.426 10:57:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.426 10:57:26 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:06.426 10:57:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.426 10:57:26 -- setup/acl.sh@20 -- # continue 00:03:06.426 10:57:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.426 10:57:26 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:06.426 10:57:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.426 10:57:26 -- setup/acl.sh@20 -- # continue 00:03:06.426 10:57:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.426 10:57:26 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:06.426 10:57:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.426 10:57:26 -- setup/acl.sh@20 -- # continue 00:03:06.426 10:57:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.426 10:57:26 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:06.426 10:57:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.426 10:57:26 -- setup/acl.sh@20 -- # continue 00:03:06.426 10:57:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.426 10:57:26 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:06.426 10:57:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.426 10:57:26 -- setup/acl.sh@20 -- # continue 00:03:06.426 10:57:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.426 10:57:26 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:06.426 10:57:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.426 10:57:26 -- setup/acl.sh@20 -- # continue 00:03:06.426 10:57:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.426 10:57:26 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:06.426 10:57:26 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.426 10:57:26 -- setup/acl.sh@20 -- # continue 00:03:06.426 10:57:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.426 10:57:26 -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:03:06.426 10:57:26 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:06.426 10:57:26 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:06.426 10:57:26 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:06.426 10:57:26 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:06.426 10:57:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.426 10:57:26 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:06.426 10:57:26 -- setup/acl.sh@54 -- # run_test denied denied 00:03:06.426 10:57:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:06.426 10:57:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:06.426 10:57:26 -- common/autotest_common.sh@10 -- # set +x 00:03:06.426 ************************************ 00:03:06.426 START TEST denied 00:03:06.426 ************************************ 00:03:06.426 10:57:26 -- common/autotest_common.sh@1114 -- # denied 00:03:06.426 10:57:26 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:03:06.427 10:57:26 -- setup/acl.sh@38 -- 
# PCI_BLOCKED=' 0000:d8:00.0' 00:03:06.427 10:57:26 -- setup/acl.sh@38 -- # setup output config 00:03:06.427 10:57:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.427 10:57:26 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:10.619 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:03:10.619 10:57:30 -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:03:10.619 10:57:30 -- setup/acl.sh@28 -- # local dev driver 00:03:10.619 10:57:30 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:10.619 10:57:30 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:03:10.619 10:57:30 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:03:10.619 10:57:30 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:10.619 10:57:30 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:10.619 10:57:30 -- setup/acl.sh@41 -- # setup reset 00:03:10.619 10:57:30 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:10.619 10:57:30 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:15.890 00:03:15.890 real 0m9.047s 00:03:15.890 user 0m2.715s 00:03:15.890 sys 0m5.446s 00:03:15.890 10:57:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:15.890 10:57:35 -- common/autotest_common.sh@10 -- # set +x 00:03:15.890 ************************************ 00:03:15.890 END TEST denied 00:03:15.890 ************************************ 00:03:15.890 10:57:35 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:15.890 10:57:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:15.890 10:57:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:15.890 10:57:35 -- common/autotest_common.sh@10 -- # set +x 00:03:15.890 ************************************ 00:03:15.890 START TEST allowed 00:03:15.890 ************************************ 00:03:15.890 10:57:35 -- common/autotest_common.sh@1114 -- # allowed 00:03:15.890 10:57:35 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:03:15.890 10:57:35 -- setup/acl.sh@45 -- # setup output config 00:03:15.890 10:57:35 -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:03:15.890 10:57:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:15.890 10:57:35 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:24.014 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:24.014 10:57:43 -- setup/acl.sh@47 -- # verify 00:03:24.014 10:57:43 -- setup/acl.sh@28 -- # local dev driver 00:03:24.014 10:57:43 -- setup/acl.sh@48 -- # setup reset 00:03:24.014 10:57:43 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:24.014 10:57:43 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:28.206 00:03:28.206 real 0m12.040s 00:03:28.206 user 0m3.138s 00:03:28.206 sys 0m5.650s 00:03:28.206 10:57:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:28.206 10:57:47 -- common/autotest_common.sh@10 -- # set +x 00:03:28.206 ************************************ 00:03:28.206 END TEST allowed 00:03:28.206 ************************************ 00:03:28.206 00:03:28.206 real 0m28.462s 00:03:28.206 user 0m8.546s 00:03:28.206 sys 0m15.939s 00:03:28.206 10:57:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:28.206 10:57:48 -- common/autotest_common.sh@10 -- # set +x 00:03:28.206 ************************************ 00:03:28.206 END TEST acl 00:03:28.206 ************************************ 
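The acl suite that just finished drives setup.sh twice against the NVMe controller at 0000:d8:00.0: once with PCI_BLOCKED set to that BDF (the denied test, which expects the "Skipping denied controller" message and the device left on the nvme driver), and once with PCI_ALLOWED set to the same BDF (the allowed test, which expects the nvme -> vfio-pci rebind). The sketch below is not the setup.sh implementation; it is a minimal, self-contained illustration of allow/deny filtering over PCI BDFs, and the helper name should_bind is hypothetical.

#!/usr/bin/env bash
# Illustrative sketch only -- not the SPDK setup.sh logic.
# PCI_BLOCKED / PCI_ALLOWED are space-separated BDF lists, as in the log above.
should_bind() {
    local bdf=$1 b a
    # Anything on the block list is always skipped.
    for b in $PCI_BLOCKED; do
        if [[ $bdf == "$b" ]]; then
            echo "Skipping denied controller at $bdf"
            return 1
        fi
    done
    # If an allow list is set, only listed devices are eligible for rebinding.
    if [[ -n ${PCI_ALLOWED:-} ]]; then
        for a in $PCI_ALLOWED; do
            [[ $bdf == "$a" ]] && return 0
        done
        return 1
    fi
    return 0
}

# Mirrors the two test cases above.
PCI_BLOCKED=' 0000:d8:00.0' PCI_ALLOWED=''
should_bind 0000:d8:00.0 || true          # prints the denied-controller message

PCI_BLOCKED='' PCI_ALLOWED='0000:d8:00.0'
should_bind 0000:d8:00.0 && echo "0000:d8:00.0: eligible for rebind"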
00:03:28.206 10:57:48 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:28.206 10:57:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:28.206 10:57:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:28.206 10:57:48 -- common/autotest_common.sh@10 -- # set +x 00:03:28.206 ************************************ 00:03:28.206 START TEST hugepages 00:03:28.206 ************************************ 00:03:28.206 10:57:48 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:28.206 * Looking for test storage... 00:03:28.206 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:28.206 10:57:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:28.206 10:57:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:28.206 10:57:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:28.206 10:57:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:28.206 10:57:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:28.206 10:57:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:28.206 10:57:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:28.206 10:57:48 -- scripts/common.sh@335 -- # IFS=.-: 00:03:28.206 10:57:48 -- scripts/common.sh@335 -- # read -ra ver1 00:03:28.206 10:57:48 -- scripts/common.sh@336 -- # IFS=.-: 00:03:28.206 10:57:48 -- scripts/common.sh@336 -- # read -ra ver2 00:03:28.206 10:57:48 -- scripts/common.sh@337 -- # local 'op=<' 00:03:28.206 10:57:48 -- scripts/common.sh@339 -- # ver1_l=2 00:03:28.206 10:57:48 -- scripts/common.sh@340 -- # ver2_l=1 00:03:28.206 10:57:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:28.206 10:57:48 -- scripts/common.sh@343 -- # case "$op" in 00:03:28.206 10:57:48 -- scripts/common.sh@344 -- # : 1 00:03:28.206 10:57:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:28.206 10:57:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:28.206 10:57:48 -- scripts/common.sh@364 -- # decimal 1 00:03:28.206 10:57:48 -- scripts/common.sh@352 -- # local d=1 00:03:28.206 10:57:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:28.206 10:57:48 -- scripts/common.sh@354 -- # echo 1 00:03:28.206 10:57:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:28.206 10:57:48 -- scripts/common.sh@365 -- # decimal 2 00:03:28.206 10:57:48 -- scripts/common.sh@352 -- # local d=2 00:03:28.206 10:57:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:28.206 10:57:48 -- scripts/common.sh@354 -- # echo 2 00:03:28.206 10:57:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:28.206 10:57:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:28.206 10:57:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:28.206 10:57:48 -- scripts/common.sh@367 -- # return 0 00:03:28.206 10:57:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:28.206 10:57:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:28.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.206 --rc genhtml_branch_coverage=1 00:03:28.206 --rc genhtml_function_coverage=1 00:03:28.206 --rc genhtml_legend=1 00:03:28.206 --rc geninfo_all_blocks=1 00:03:28.206 --rc geninfo_unexecuted_blocks=1 00:03:28.206 00:03:28.206 ' 00:03:28.206 10:57:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:28.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.206 --rc genhtml_branch_coverage=1 00:03:28.206 --rc genhtml_function_coverage=1 00:03:28.206 --rc genhtml_legend=1 00:03:28.206 --rc geninfo_all_blocks=1 00:03:28.206 --rc geninfo_unexecuted_blocks=1 00:03:28.206 00:03:28.206 ' 00:03:28.206 10:57:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:28.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.206 --rc genhtml_branch_coverage=1 00:03:28.206 --rc genhtml_function_coverage=1 00:03:28.206 --rc genhtml_legend=1 00:03:28.206 --rc geninfo_all_blocks=1 00:03:28.206 --rc geninfo_unexecuted_blocks=1 00:03:28.206 00:03:28.206 ' 00:03:28.206 10:57:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:28.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.206 --rc genhtml_branch_coverage=1 00:03:28.206 --rc genhtml_function_coverage=1 00:03:28.206 --rc genhtml_legend=1 00:03:28.206 --rc geninfo_all_blocks=1 00:03:28.206 --rc geninfo_unexecuted_blocks=1 00:03:28.206 00:03:28.206 ' 00:03:28.206 10:57:48 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:28.206 10:57:48 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:28.206 10:57:48 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:28.206 10:57:48 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:28.206 10:57:48 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:28.206 10:57:48 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:28.206 10:57:48 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:28.206 10:57:48 -- setup/common.sh@18 -- # local node= 00:03:28.206 10:57:48 -- setup/common.sh@19 -- # local var val 00:03:28.206 10:57:48 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.206 10:57:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.206 10:57:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.206 10:57:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.206 10:57:48 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.206 
10:57:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.206 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.206 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 76788668 kB' 'MemFree: 53042512 kB' 'MemAvailable: 58074760 kB' 'Buffers: 2708 kB' 'Cached: 15845804 kB' 'SwapCached: 0 kB' 'Active: 12420704 kB' 'Inactive: 4039544 kB' 'Active(anon): 11238692 kB' 'Inactive(anon): 0 kB' 'Active(file): 1182012 kB' 'Inactive(file): 4039544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 614996 kB' 'Mapped: 200192 kB' 'Shmem: 10626956 kB' 'KReclaimable: 524344 kB' 'Slab: 1484284 kB' 'SReclaimable: 524344 kB' 'SUnreclaim: 959940 kB' 'KernelStack: 22768 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 44685784 kB' 'Committed_AS: 12575556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220860 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2596320 kB' 'DirectMap2M: 27488256 kB' 'DirectMap1G: 56623104 kB' 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ 
AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 
10:57:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.207 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.207 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.208 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.208 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.208 10:57:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.208 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.208 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.208 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.208 10:57:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.208 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.208 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.208 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.208 10:57:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.208 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.208 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.208 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.208 10:57:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.208 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.208 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 
00:03:28.208 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.208 10:57:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.208 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.208 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.208 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.208 10:57:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.208 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.208 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.208 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.208 10:57:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.208 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.208 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.208 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.208 10:57:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.208 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.208 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.208 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.208 10:57:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.208 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.208 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.208 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.208 10:57:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.208 10:57:48 -- setup/common.sh@32 -- # continue 00:03:28.208 10:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.208 10:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.208 10:57:48 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.208 10:57:48 -- setup/common.sh@33 -- # echo 2048 00:03:28.208 10:57:48 -- setup/common.sh@33 -- # return 0 00:03:28.208 10:57:48 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:28.208 10:57:48 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:28.208 10:57:48 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:28.208 10:57:48 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:28.208 10:57:48 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:28.208 10:57:48 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:28.208 10:57:48 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:28.208 10:57:48 -- setup/hugepages.sh@207 -- # get_nodes 00:03:28.208 10:57:48 -- setup/hugepages.sh@27 -- # local node 00:03:28.208 10:57:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:28.208 10:57:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:28.208 10:57:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:28.208 10:57:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:28.208 10:57:48 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:28.208 10:57:48 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:28.208 10:57:48 -- setup/hugepages.sh@208 -- # clear_hp 00:03:28.208 10:57:48 -- setup/hugepages.sh@37 -- # local node hp 00:03:28.208 10:57:48 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:28.208 10:57:48 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:28.208 10:57:48 -- setup/hugepages.sh@41 -- # echo 0 
00:03:28.208 10:57:48 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:28.208 10:57:48 -- setup/hugepages.sh@41 -- # echo 0 00:03:28.208 10:57:48 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:28.208 10:57:48 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:28.208 10:57:48 -- setup/hugepages.sh@41 -- # echo 0 00:03:28.208 10:57:48 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:28.208 10:57:48 -- setup/hugepages.sh@41 -- # echo 0 00:03:28.208 10:57:48 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:28.208 10:57:48 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:28.208 10:57:48 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:28.208 10:57:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:28.208 10:57:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:28.208 10:57:48 -- common/autotest_common.sh@10 -- # set +x 00:03:28.208 ************************************ 00:03:28.208 START TEST default_setup 00:03:28.208 ************************************ 00:03:28.208 10:57:48 -- common/autotest_common.sh@1114 -- # default_setup 00:03:28.208 10:57:48 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:28.208 10:57:48 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:28.208 10:57:48 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:28.208 10:57:48 -- setup/hugepages.sh@51 -- # shift 00:03:28.208 10:57:48 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:28.208 10:57:48 -- setup/hugepages.sh@52 -- # local node_ids 00:03:28.208 10:57:48 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:28.208 10:57:48 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:28.208 10:57:48 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:28.208 10:57:48 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:28.208 10:57:48 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:28.208 10:57:48 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:28.208 10:57:48 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:28.208 10:57:48 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:28.208 10:57:48 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:28.208 10:57:48 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:28.208 10:57:48 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:28.208 10:57:48 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:28.208 10:57:48 -- setup/hugepages.sh@73 -- # return 0 00:03:28.208 10:57:48 -- setup/hugepages.sh@137 -- # setup output 00:03:28.208 10:57:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.208 10:57:48 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:30.741 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:30.741 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:30.741 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:30.741 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:30.741 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:30.741 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:30.741 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:30.741 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:30.741 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:30.741 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:30.741 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 
00:03:30.741 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:30.741 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:30.741 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:30.741 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:30.741 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:34.030 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:35.408 10:57:55 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:35.408 10:57:55 -- setup/hugepages.sh@89 -- # local node 00:03:35.408 10:57:55 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:35.408 10:57:55 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:35.408 10:57:55 -- setup/hugepages.sh@92 -- # local surp 00:03:35.408 10:57:55 -- setup/hugepages.sh@93 -- # local resv 00:03:35.408 10:57:55 -- setup/hugepages.sh@94 -- # local anon 00:03:35.408 10:57:55 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:35.408 10:57:55 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:35.408 10:57:55 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:35.408 10:57:55 -- setup/common.sh@18 -- # local node= 00:03:35.408 10:57:55 -- setup/common.sh@19 -- # local var val 00:03:35.408 10:57:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.408 10:57:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.408 10:57:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.408 10:57:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.408 10:57:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.408 10:57:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.408 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.408 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.408 10:57:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 76788668 kB' 'MemFree: 55216368 kB' 'MemAvailable: 60248328 kB' 'Buffers: 2708 kB' 'Cached: 15845980 kB' 'SwapCached: 0 kB' 'Active: 12424776 kB' 'Inactive: 4039544 kB' 'Active(anon): 11242764 kB' 'Inactive(anon): 0 kB' 'Active(file): 1182012 kB' 'Inactive(file): 4039544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 619124 kB' 'Mapped: 200136 kB' 'Shmem: 10627132 kB' 'KReclaimable: 524056 kB' 'Slab: 1482476 kB' 'SReclaimable: 524056 kB' 'SUnreclaim: 958420 kB' 'KernelStack: 23008 kB' 'PageTables: 9348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 45734360 kB' 'Committed_AS: 12583940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220844 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2596320 kB' 'DirectMap2M: 27488256 kB' 'DirectMap1G: 56623104 kB' 00:03:35.408 10:57:55 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.408 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.408 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.408 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.408 10:57:55 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.408 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.408 10:57:55 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:35.408 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.408 10:57:55 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.408 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.408 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.408 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.408 10:57:55 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.408 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.408 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.408 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.408 10:57:55 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.408 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.408 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.408 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.408 10:57:55 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.408 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.408 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.408 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.408 10:57:55 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.408 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.408 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.408 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.408 10:57:55 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.408 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.408 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.408 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.408 10:57:55 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.408 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.408 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.408 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.408 10:57:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.408 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.408 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.408 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.408 10:57:55 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.408 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.408 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.408 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.408 10:57:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.408 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.408 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.408 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.408 10:57:55 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.408 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.408 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.408 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.408 10:57:55 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.408 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.408 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.408 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.670 10:57:55 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.670 
10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.670 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.670 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.670 10:57:55 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.670 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.670 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.670 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.670 10:57:55 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.670 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.670 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.670 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.670 10:57:55 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.670 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.670 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.670 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.670 10:57:55 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.670 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.670 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.670 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.670 10:57:55 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.670 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.670 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.670 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.670 10:57:55 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.670 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.670 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.670 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.670 10:57:55 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.670 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.670 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.670 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.670 10:57:55 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.670 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.670 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.670 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.670 10:57:55 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.670 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.670 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.670 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.670 10:57:55 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.670 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.670 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.670 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.670 10:57:55 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.670 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.670 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.670 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.670 10:57:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.670 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.670 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.670 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.670 10:57:55 -- setup/common.sh@32 -- # [[ KernelStack 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.670 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.670 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:55 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.671 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:55 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.671 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.671 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:55 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.671 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.671 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:55 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.671 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:55 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.671 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.671 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.671 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.671 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:55 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.671 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.671 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:55 -- setup/common.sh@31 
-- # read -r var val _ 00:03:35.671 10:57:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.671 10:57:55 -- setup/common.sh@33 -- # echo 0 00:03:35.671 10:57:55 -- setup/common.sh@33 -- # return 0 00:03:35.671 10:57:55 -- setup/hugepages.sh@97 -- # anon=0 00:03:35.671 10:57:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:35.671 10:57:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.671 10:57:55 -- setup/common.sh@18 -- # local node= 00:03:35.671 10:57:55 -- setup/common.sh@19 -- # local var val 00:03:35.671 10:57:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.671 10:57:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.671 10:57:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.671 10:57:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.671 10:57:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.671 10:57:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.671 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 76788668 kB' 'MemFree: 55218940 kB' 'MemAvailable: 60250900 kB' 'Buffers: 2708 kB' 'Cached: 15845984 kB' 'SwapCached: 0 kB' 'Active: 12425048 kB' 'Inactive: 4039544 kB' 'Active(anon): 11243036 kB' 'Inactive(anon): 0 kB' 'Active(file): 1182012 kB' 'Inactive(file): 4039544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 619468 kB' 'Mapped: 200072 kB' 'Shmem: 10627136 kB' 'KReclaimable: 524056 kB' 'Slab: 1482480 kB' 'SReclaimable: 524056 kB' 'SUnreclaim: 958424 kB' 'KernelStack: 22896 kB' 'PageTables: 9008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 45734360 kB' 'Committed_AS: 12581544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220796 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2596320 kB' 'DirectMap2M: 27488256 kB' 'DirectMap1G: 56623104 kB' 00:03:35.671 10:57:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.671 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.671 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.671 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.671 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.671 10:57:55 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # 
continue 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # [[ SecPageTables 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.671 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.671 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.672 
10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.672 10:57:56 -- setup/common.sh@33 -- # echo 0 00:03:35.672 10:57:56 -- setup/common.sh@33 -- # return 0 00:03:35.672 10:57:56 -- setup/hugepages.sh@99 -- # surp=0 00:03:35.672 10:57:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:35.672 10:57:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:35.672 10:57:56 -- setup/common.sh@18 -- # local node= 00:03:35.672 10:57:56 -- setup/common.sh@19 -- # local var val 00:03:35.672 10:57:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.672 10:57:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.672 10:57:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.672 10:57:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.672 10:57:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.672 10:57:56 -- setup/common.sh@29 
-- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.672 10:57:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 76788668 kB' 'MemFree: 55219268 kB' 'MemAvailable: 60251228 kB' 'Buffers: 2708 kB' 'Cached: 15845996 kB' 'SwapCached: 0 kB' 'Active: 12425288 kB' 'Inactive: 4039544 kB' 'Active(anon): 11243276 kB' 'Inactive(anon): 0 kB' 'Active(file): 1182012 kB' 'Inactive(file): 4039544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 619668 kB' 'Mapped: 200072 kB' 'Shmem: 10627148 kB' 'KReclaimable: 524056 kB' 'Slab: 1482448 kB' 'SReclaimable: 524056 kB' 'SUnreclaim: 958392 kB' 'KernelStack: 23104 kB' 'PageTables: 9492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 45734360 kB' 'Committed_AS: 12583068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220956 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2596320 kB' 'DirectMap2M: 27488256 kB' 'DirectMap1G: 56623104 kB' 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:35.672 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.672 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.672 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 
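The get_meminfo calls traced above all follow one pattern: choose /proc/meminfo (or /sys/devices/system/node/node<N>/meminfo when a node number is passed), strip the "Node <N> " prefix that the per-node files carry, then scan key/value pairs with IFS=': ' until the requested field is found and echo its value. A condensed bash sketch of that logic (a simplified, hypothetical helper for readability, not the verbatim common.sh source):

get_meminfo_sketch() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo line var val
    # Per-node counters live under sysfs; every line there is prefixed with "Node <N> ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        line=${line#"Node $node "}           # no-op for /proc/meminfo lines
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < "$mem_f"
    echo 0                                   # field absent -> report 0, as the trace does
}

For example, get_meminfo_sketch HugePages_Rsvd would print 0 for the snapshot shown above, which is the value the script goes on to store in resv.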
00:03:35.673 10:57:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.673 10:57:56 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # [[ 
CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.673 10:57:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.673 10:57:56 -- setup/common.sh@33 -- # echo 0 00:03:35.673 10:57:56 -- setup/common.sh@33 -- # return 0 00:03:35.673 10:57:56 -- setup/hugepages.sh@100 -- # resv=0 00:03:35.673 10:57:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:35.673 nr_hugepages=1024 00:03:35.673 10:57:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:35.673 resv_hugepages=0 00:03:35.673 10:57:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:35.673 surplus_hugepages=0 00:03:35.673 10:57:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:35.673 anon_hugepages=0 00:03:35.673 10:57:56 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:35.673 10:57:56 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:35.673 10:57:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:35.673 10:57:56 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:35.673 10:57:56 -- setup/common.sh@18 -- # local node= 00:03:35.673 10:57:56 -- setup/common.sh@19 -- # local var val 00:03:35.673 10:57:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.673 10:57:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.673 10:57:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.673 10:57:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.673 10:57:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.673 10:57:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.673 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.674 10:57:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 76788668 kB' 'MemFree: 55219928 kB' 'MemAvailable: 60251888 kB' 'Buffers: 2708 kB' 'Cached: 15846008 kB' 'SwapCached: 0 kB' 'Active: 12425256 kB' 'Inactive: 4039544 kB' 'Active(anon): 11243244 kB' 'Inactive(anon): 0 kB' 'Active(file): 1182012 kB' 'Inactive(file): 4039544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 619584 kB' 'Mapped: 200072 kB' 
'Shmem: 10627160 kB' 'KReclaimable: 524056 kB' 'Slab: 1482448 kB' 'SReclaimable: 524056 kB' 'SUnreclaim: 958392 kB' 'KernelStack: 23056 kB' 'PageTables: 9644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 45734360 kB' 'Committed_AS: 12582716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220972 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2596320 kB' 'DirectMap2M: 27488256 kB' 'DirectMap1G: 56623104 kB' 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # 
continue 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.674 10:57:56 -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.674 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.674 10:57:56 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:35.674 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 10:57:56 -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.675 10:57:56 -- setup/common.sh@33 -- # echo 1024 00:03:35.675 10:57:56 -- setup/common.sh@33 -- # return 0 00:03:35.675 10:57:56 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:35.675 10:57:56 -- setup/hugepages.sh@112 -- # get_nodes 00:03:35.675 10:57:56 -- setup/hugepages.sh@27 -- # local node 00:03:35.675 10:57:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.675 10:57:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:35.675 10:57:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.675 10:57:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:35.675 10:57:56 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:35.675 10:57:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:35.675 10:57:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:35.675 10:57:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:35.675 10:57:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:35.675 10:57:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.675 10:57:56 -- setup/common.sh@18 -- # local node=0 00:03:35.675 10:57:56 -- setup/common.sh@19 -- # local var val 00:03:35.675 10:57:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.675 10:57:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.675 10:57:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:35.675 10:57:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:35.675 10:57:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.675 10:57:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 10:57:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32624304 kB' 'MemFree: 24094488 kB' 'MemUsed: 8529816 kB' 'SwapCached: 0 kB' 'Active: 3968392 kB' 'Inactive: 379308 kB' 'Active(anon): 3055396 kB' 'Inactive(anon): 0 kB' 'Active(file): 912996 kB' 'Inactive(file): 379308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3916128 kB' 'Mapped: 84536 kB' 'AnonPages: 434856 kB' 'Shmem: 2623824 kB' 'KernelStack: 12824 kB' 'PageTables: 5460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 373344 kB' 'Slab: 973080 kB' 'SReclaimable: 373344 kB' 'SUnreclaim: 599736 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # continue 
00:03:35.675 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # [[ FilePages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.675 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.675 10:57:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.676 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.676 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 10:57:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.676 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.676 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 10:57:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.676 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.676 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 10:57:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.676 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.676 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 10:57:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.676 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.676 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 10:57:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.676 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.676 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 10:57:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.676 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.676 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 10:57:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.676 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.676 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 10:57:56 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:35.676 10:57:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.676 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.676 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 10:57:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.676 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.676 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 10:57:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.676 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.676 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 10:57:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.676 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.676 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 10:57:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.676 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.676 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 10:57:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.676 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.676 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 10:57:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.676 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.676 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 10:57:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.676 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.676 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 10:57:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.676 10:57:56 -- setup/common.sh@32 -- # continue 00:03:35.676 10:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.676 10:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.676 10:57:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.676 10:57:56 -- setup/common.sh@33 -- # echo 0 00:03:35.676 10:57:56 -- setup/common.sh@33 -- # return 0 00:03:35.676 10:57:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:35.676 10:57:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:35.676 10:57:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:35.676 10:57:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:35.676 10:57:56 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:35.676 node0=1024 expecting 1024 00:03:35.676 10:57:56 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:35.676 00:03:35.676 real 0m7.871s 00:03:35.676 user 0m1.770s 00:03:35.676 sys 0m2.772s 00:03:35.676 10:57:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:35.676 10:57:56 -- 
common/autotest_common.sh@10 -- # set +x 00:03:35.676 ************************************ 00:03:35.676 END TEST default_setup 00:03:35.676 ************************************ 00:03:35.676 10:57:56 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:35.676 10:57:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:35.676 10:57:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:35.676 10:57:56 -- common/autotest_common.sh@10 -- # set +x 00:03:35.676 ************************************ 00:03:35.676 START TEST per_node_1G_alloc 00:03:35.676 ************************************ 00:03:35.676 10:57:56 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:03:35.676 10:57:56 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:35.676 10:57:56 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:35.676 10:57:56 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:35.676 10:57:56 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:35.676 10:57:56 -- setup/hugepages.sh@51 -- # shift 00:03:35.676 10:57:56 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:35.676 10:57:56 -- setup/hugepages.sh@52 -- # local node_ids 00:03:35.676 10:57:56 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:35.676 10:57:56 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:35.676 10:57:56 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:35.676 10:57:56 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:35.676 10:57:56 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:35.676 10:57:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:35.676 10:57:56 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:35.676 10:57:56 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:35.676 10:57:56 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:35.676 10:57:56 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:35.676 10:57:56 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:35.676 10:57:56 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:35.676 10:57:56 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:35.676 10:57:56 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:35.676 10:57:56 -- setup/hugepages.sh@73 -- # return 0 00:03:35.676 10:57:56 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:35.676 10:57:56 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:35.676 10:57:56 -- setup/hugepages.sh@146 -- # setup output 00:03:35.676 10:57:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.676 10:57:56 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:38.211 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:38.211 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:38.471 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:38.471 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:38.471 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:38.471 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:38.471 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:38.471 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:38.471 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:38.471 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:38.471 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:38.471 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:03:38.471 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:38.471 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:38.471 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:38.471 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:38.471 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:39.852 10:58:00 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:39.852 10:58:00 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:39.852 10:58:00 -- setup/hugepages.sh@89 -- # local node 00:03:39.852 10:58:00 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:39.852 10:58:00 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:39.852 10:58:00 -- setup/hugepages.sh@92 -- # local surp 00:03:39.852 10:58:00 -- setup/hugepages.sh@93 -- # local resv 00:03:39.852 10:58:00 -- setup/hugepages.sh@94 -- # local anon 00:03:39.852 10:58:00 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:39.852 10:58:00 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:39.852 10:58:00 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:39.852 10:58:00 -- setup/common.sh@18 -- # local node= 00:03:39.852 10:58:00 -- setup/common.sh@19 -- # local var val 00:03:39.852 10:58:00 -- setup/common.sh@20 -- # local mem_f mem 00:03:39.852 10:58:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.852 10:58:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.852 10:58:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.852 10:58:00 -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.852 10:58:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.852 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.852 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 76788668 kB' 'MemFree: 55220812 kB' 'MemAvailable: 60252772 kB' 'Buffers: 2708 kB' 'Cached: 15846132 kB' 'SwapCached: 0 kB' 'Active: 12422656 kB' 'Inactive: 4039544 kB' 'Active(anon): 11240644 kB' 'Inactive(anon): 0 kB' 'Active(file): 1182012 kB' 'Inactive(file): 4039544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616844 kB' 'Mapped: 199260 kB' 'Shmem: 10627284 kB' 'KReclaimable: 524056 kB' 'Slab: 1481976 kB' 'SReclaimable: 524056 kB' 'SUnreclaim: 957920 kB' 'KernelStack: 22832 kB' 'PageTables: 8696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 45734360 kB' 'Committed_AS: 12570832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220876 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2596320 kB' 'DirectMap2M: 27488256 kB' 'DirectMap1G: 56623104 kB' 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ MemFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 
10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.853 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.853 10:58:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.854 10:58:00 -- 
setup/common.sh@32 -- # continue 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.854 10:58:00 -- setup/common.sh@33 -- # echo 0 00:03:39.854 10:58:00 -- setup/common.sh@33 -- # return 0 00:03:39.854 10:58:00 -- setup/hugepages.sh@97 -- # anon=0 00:03:39.854 10:58:00 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:39.854 10:58:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:39.854 10:58:00 -- setup/common.sh@18 -- # local node= 00:03:39.854 10:58:00 -- setup/common.sh@19 -- # local var val 00:03:39.854 10:58:00 -- setup/common.sh@20 -- # local mem_f mem 00:03:39.854 10:58:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.854 10:58:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.854 10:58:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.854 10:58:00 -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.854 10:58:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.854 10:58:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 76788668 kB' 'MemFree: 55221364 kB' 'MemAvailable: 60253324 kB' 'Buffers: 2708 kB' 'Cached: 15846132 kB' 'SwapCached: 0 kB' 'Active: 12422732 kB' 'Inactive: 4039544 kB' 'Active(anon): 11240720 kB' 'Inactive(anon): 0 kB' 'Active(file): 1182012 kB' 'Inactive(file): 4039544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616920 kB' 'Mapped: 199244 kB' 'Shmem: 10627284 kB' 'KReclaimable: 524056 kB' 'Slab: 1482020 kB' 'SReclaimable: 524056 kB' 'SUnreclaim: 957964 kB' 'KernelStack: 22832 kB' 'PageTables: 8724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 45734360 kB' 'Committed_AS: 12570844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220844 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2596320 kB' 'DirectMap2M: 27488256 kB' 'DirectMap1G: 56623104 kB' 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.854 
10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.854 
10:58:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 
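The field-by-field scan in the trace above is setup/common.sh's get_meminfo at work: with IFS=': ' it reads one /proc/meminfo line at a time, takes the "continue" branch for every key that is not the one requested (here HugePages_Surp, earlier AnonHugePages), and echoes the matching value once it is found. A minimal standalone sketch of that lookup follows; the helper name (meminfo_value) is an illustrative assumption, not the project's actual function, and it covers only the system-wide /proc/meminfo file rather than the per-NUMA-node copies the real script can also read.

#!/usr/bin/env bash
# Hedged sketch of the lookup the xtrace above performs; not the project's
# setup/common.sh, just a simplified stand-in with an assumed name.
meminfo_value() {
    local get=$1 var val _
    # The real helper can also read /sys/devices/system/node/node<N>/meminfo
    # and strips the "Node N " prefix; this sketch stays with /proc/meminfo.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip MemTotal, MemFree, ... as in the trace
        echo "$val"                        # value in kB, or a bare page count
        return 0
    done < /proc/meminfo
    return 1
}
meminfo_value HugePages_Surp   # prints 0 on this box, matching the surp=0 / anon=0 values in the surrounding trace

Splitting on both the colon and the surrounding spaces lets the same loop handle entries with a kB suffix (MemTotal, AnonHugePages) and bare page counts (HugePages_Total, HugePages_Surp) alike, which is why the trace repeats the identical read/continue pattern for every key.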
00:03:39.854 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.854 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.854 10:58:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.855 10:58:00 -- setup/common.sh@33 -- # echo 0 00:03:39.855 10:58:00 -- setup/common.sh@33 -- # return 0 00:03:39.855 10:58:00 -- setup/hugepages.sh@99 -- # surp=0 00:03:39.855 10:58:00 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:39.855 10:58:00 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:39.855 10:58:00 -- setup/common.sh@18 -- # local node= 00:03:39.855 10:58:00 -- setup/common.sh@19 -- # local var val 00:03:39.855 10:58:00 -- setup/common.sh@20 -- # local mem_f mem 00:03:39.855 10:58:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.855 10:58:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.855 10:58:00 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.855 10:58:00 -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.855 10:58:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.855 10:58:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 76788668 kB' 'MemFree: 55221252 kB' 'MemAvailable: 60253212 kB' 'Buffers: 2708 kB' 'Cached: 15846144 kB' 'SwapCached: 0 kB' 'Active: 12422608 kB' 'Inactive: 4039544 kB' 'Active(anon): 11240596 kB' 'Inactive(anon): 0 kB' 'Active(file): 1182012 kB' 'Inactive(file): 4039544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616736 kB' 'Mapped: 199244 kB' 'Shmem: 10627296 kB' 'KReclaimable: 524056 kB' 'Slab: 1482020 kB' 'SReclaimable: 524056 kB' 'SUnreclaim: 957964 kB' 'KernelStack: 22832 kB' 'PageTables: 8724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 45734360 kB' 'Committed_AS: 12570856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220844 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2596320 kB' 'DirectMap2M: 27488256 kB' 'DirectMap1G: 56623104 kB' 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.855 10:58:00 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.855 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.855 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # 
continue 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.856 10:58:00 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.856 10:58:00 -- setup/common.sh@33 -- # echo 0 00:03:39.856 10:58:00 -- setup/common.sh@33 -- # return 0 00:03:39.856 10:58:00 -- setup/hugepages.sh@100 -- # resv=0 00:03:39.856 10:58:00 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:39.856 nr_hugepages=1024 00:03:39.856 10:58:00 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:39.856 resv_hugepages=0 00:03:39.856 10:58:00 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:39.856 surplus_hugepages=0 00:03:39.856 10:58:00 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:39.856 anon_hugepages=0 00:03:39.856 10:58:00 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:39.856 10:58:00 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:39.856 10:58:00 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:39.856 10:58:00 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:39.856 10:58:00 -- setup/common.sh@18 -- # local node= 00:03:39.856 10:58:00 -- setup/common.sh@19 -- # local var val 00:03:39.856 10:58:00 -- setup/common.sh@20 -- # local mem_f mem 00:03:39.856 10:58:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.856 10:58:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.856 10:58:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.856 10:58:00 -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.856 10:58:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.856 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.857 10:58:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 76788668 kB' 'MemFree: 55221252 kB' 'MemAvailable: 60253212 kB' 'Buffers: 2708 kB' 'Cached: 15846160 kB' 'SwapCached: 0 kB' 'Active: 12422556 kB' 'Inactive: 4039544 kB' 'Active(anon): 11240544 kB' 'Inactive(anon): 0 kB' 'Active(file): 1182012 kB' 'Inactive(file): 4039544 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616720 kB' 'Mapped: 199244 kB' 'Shmem: 10627312 kB' 'KReclaimable: 524056 kB' 'Slab: 1482020 kB' 'SReclaimable: 524056 kB' 'SUnreclaim: 957964 kB' 'KernelStack: 22800 kB' 'PageTables: 8616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 45734360 kB' 'Committed_AS: 12572012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220828 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2596320 kB' 'DirectMap2M: 27488256 kB' 'DirectMap1G: 56623104 kB' 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.857 10:58:00 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.857 10:58:00 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.857 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.857 10:58:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # [[ 
Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.858 10:58:00 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.858 10:58:00 -- setup/common.sh@33 -- # echo 1024 00:03:39.858 10:58:00 -- setup/common.sh@33 -- # return 0 00:03:39.858 10:58:00 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:39.858 10:58:00 -- setup/hugepages.sh@112 -- # get_nodes 00:03:39.858 10:58:00 -- setup/hugepages.sh@27 -- # local node 00:03:39.858 10:58:00 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:39.858 10:58:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:39.858 10:58:00 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:39.858 10:58:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:39.858 10:58:00 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:39.858 10:58:00 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:39.858 10:58:00 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:39.858 10:58:00 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:39.858 10:58:00 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:39.858 10:58:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:39.858 10:58:00 -- setup/common.sh@18 -- # local node=0 00:03:39.858 10:58:00 -- setup/common.sh@19 -- # local var val 00:03:39.858 10:58:00 -- setup/common.sh@20 -- # local mem_f mem 00:03:39.858 10:58:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.858 10:58:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:39.858 10:58:00 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:39.858 10:58:00 -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.858 10:58:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.858 10:58:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32624304 kB' 'MemFree: 25140580 kB' 'MemUsed: 7483724 kB' 'SwapCached: 0 kB' 'Active: 3966564 kB' 'Inactive: 379308 kB' 'Active(anon): 3053568 kB' 'Inactive(anon): 0 kB' 'Active(file): 912996 kB' 'Inactive(file): 379308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3916204 kB' 'Mapped: 83868 kB' 'AnonPages: 432900 kB' 'Shmem: 2623900 kB' 'KernelStack: 12776 kB' 'PageTables: 5444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 373344 kB' 'Slab: 972828 kB' 'SReclaimable: 373344 kB' 'SUnreclaim: 599484 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.858 10:58:00 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 
00:03:39.858 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.858 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.858 10:58:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.859 10:58:00 -- 
setup/common.sh@32 -- # continue 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.859 10:58:00 -- setup/common.sh@33 -- # echo 0 00:03:39.859 10:58:00 -- setup/common.sh@33 -- # return 0 00:03:39.859 10:58:00 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:39.859 10:58:00 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:39.859 10:58:00 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:39.859 10:58:00 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:39.859 10:58:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:39.859 10:58:00 -- setup/common.sh@18 -- # local node=1 00:03:39.859 10:58:00 -- setup/common.sh@19 -- # local var val 00:03:39.859 10:58:00 -- setup/common.sh@20 -- # local mem_f mem 00:03:39.859 10:58:00 -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.859 10:58:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:39.859 10:58:00 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:39.859 10:58:00 -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.859 10:58:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.859 10:58:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44164364 kB' 'MemFree: 30077776 kB' 'MemUsed: 14086588 kB' 'SwapCached: 0 kB' 'Active: 8456104 kB' 'Inactive: 3660236 kB' 'Active(anon): 8187088 kB' 'Inactive(anon): 0 kB' 'Active(file): 269016 kB' 'Inactive(file): 3660236 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11932680 kB' 'Mapped: 115376 kB' 'AnonPages: 183872 kB' 'Shmem: 8003428 kB' 'KernelStack: 9992 kB' 'PageTables: 3216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 150712 kB' 'Slab: 509192 kB' 'SReclaimable: 150712 kB' 'SUnreclaim: 358480 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 
00:03:39.859 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.859 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.859 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.860 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.860 10:58:00 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.860 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.860 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.860 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.860 10:58:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.860 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.860 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.860 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.860 10:58:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.860 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.860 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.860 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.860 10:58:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.860 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.860 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.860 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.860 10:58:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.860 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.860 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.860 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.860 10:58:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.860 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.860 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.860 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.860 10:58:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.860 10:58:00 -- 
setup/common.sh@32 -- # continue 00:03:39.860 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.860 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.860 10:58:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.860 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.860 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.860 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.860 10:58:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.860 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.860 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.860 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.860 10:58:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.860 10:58:00 -- setup/common.sh@32 -- # continue 00:03:39.860 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:39.860 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:39.860 10:58:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.119 10:58:00 -- setup/common.sh@32 -- # continue 00:03:40.119 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.119 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.119 10:58:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.119 10:58:00 -- setup/common.sh@32 -- # continue 00:03:40.119 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.119 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.119 10:58:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.119 10:58:00 -- setup/common.sh@32 -- # continue 00:03:40.119 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.119 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.119 10:58:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.119 10:58:00 -- setup/common.sh@32 -- # continue 00:03:40.119 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.119 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.119 10:58:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.119 10:58:00 -- setup/common.sh@32 -- # continue 00:03:40.119 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.119 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.119 10:58:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.119 10:58:00 -- setup/common.sh@32 -- # continue 00:03:40.119 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.119 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.119 10:58:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.119 10:58:00 -- setup/common.sh@32 -- # continue 00:03:40.119 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.119 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.119 10:58:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.119 10:58:00 -- setup/common.sh@32 -- # continue 00:03:40.119 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.119 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.119 10:58:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.119 10:58:00 -- setup/common.sh@32 -- # continue 00:03:40.119 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.119 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 
00:03:40.119 10:58:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.119 10:58:00 -- setup/common.sh@32 -- # continue 00:03:40.119 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.119 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.119 10:58:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.119 10:58:00 -- setup/common.sh@32 -- # continue 00:03:40.119 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.119 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.119 10:58:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.119 10:58:00 -- setup/common.sh@32 -- # continue 00:03:40.119 10:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.119 10:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.119 10:58:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.119 10:58:00 -- setup/common.sh@33 -- # echo 0 00:03:40.119 10:58:00 -- setup/common.sh@33 -- # return 0 00:03:40.119 10:58:00 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:40.119 10:58:00 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:40.119 10:58:00 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:40.119 10:58:00 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:40.119 10:58:00 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:40.119 node0=512 expecting 512 00:03:40.119 10:58:00 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:40.119 10:58:00 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:40.119 10:58:00 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:40.119 10:58:00 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:40.119 node1=512 expecting 512 00:03:40.119 10:58:00 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:40.119 00:03:40.119 real 0m4.256s 00:03:40.119 user 0m1.636s 00:03:40.119 sys 0m2.650s 00:03:40.119 10:58:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:40.119 10:58:00 -- common/autotest_common.sh@10 -- # set +x 00:03:40.119 ************************************ 00:03:40.119 END TEST per_node_1G_alloc 00:03:40.119 ************************************ 00:03:40.119 10:58:00 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:40.119 10:58:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:40.119 10:58:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:40.119 10:58:00 -- common/autotest_common.sh@10 -- # set +x 00:03:40.119 ************************************ 00:03:40.119 START TEST even_2G_alloc 00:03:40.119 ************************************ 00:03:40.120 10:58:00 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:03:40.120 10:58:00 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:40.120 10:58:00 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:40.120 10:58:00 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:40.120 10:58:00 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:40.120 10:58:00 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:40.120 10:58:00 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:40.120 10:58:00 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:40.120 10:58:00 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:40.120 10:58:00 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:40.120 10:58:00 -- 
setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:40.120 10:58:00 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:40.120 10:58:00 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:40.120 10:58:00 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:40.120 10:58:00 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:40.120 10:58:00 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:40.120 10:58:00 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:40.120 10:58:00 -- setup/hugepages.sh@83 -- # : 512 00:03:40.120 10:58:00 -- setup/hugepages.sh@84 -- # : 1 00:03:40.120 10:58:00 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:40.120 10:58:00 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:40.120 10:58:00 -- setup/hugepages.sh@83 -- # : 0 00:03:40.120 10:58:00 -- setup/hugepages.sh@84 -- # : 0 00:03:40.120 10:58:00 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:40.120 10:58:00 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:40.120 10:58:00 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:40.120 10:58:00 -- setup/hugepages.sh@153 -- # setup output 00:03:40.120 10:58:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.120 10:58:00 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:42.024 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:42.024 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:42.024 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:42.024 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:42.283 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:42.283 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:42.283 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:42.283 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:42.283 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:42.283 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:42.283 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:42.283 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:42.283 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:42.283 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:42.283 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:42.283 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:42.283 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:43.664 10:58:04 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:43.664 10:58:04 -- setup/hugepages.sh@89 -- # local node 00:03:43.664 10:58:04 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:43.664 10:58:04 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:43.664 10:58:04 -- setup/hugepages.sh@92 -- # local surp 00:03:43.664 10:58:04 -- setup/hugepages.sh@93 -- # local resv 00:03:43.664 10:58:04 -- setup/hugepages.sh@94 -- # local anon 00:03:43.664 10:58:04 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:43.664 10:58:04 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:43.664 10:58:04 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:43.664 10:58:04 -- setup/common.sh@18 -- # local node= 00:03:43.664 10:58:04 -- setup/common.sh@19 -- # local var val 00:03:43.664 10:58:04 -- setup/common.sh@20 -- # local mem_f mem 00:03:43.664 10:58:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.664 10:58:04 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.664 10:58:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.664 10:58:04 -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.664 10:58:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.664 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.664 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.664 10:58:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 76788668 kB' 'MemFree: 55201436 kB' 'MemAvailable: 60233396 kB' 'Buffers: 2708 kB' 'Cached: 15846280 kB' 'SwapCached: 0 kB' 'Active: 12429580 kB' 'Inactive: 4039544 kB' 'Active(anon): 11247568 kB' 'Inactive(anon): 0 kB' 'Active(file): 1182012 kB' 'Inactive(file): 4039544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622848 kB' 'Mapped: 200312 kB' 'Shmem: 10627432 kB' 'KReclaimable: 524056 kB' 'Slab: 1482944 kB' 'SReclaimable: 524056 kB' 'SUnreclaim: 958888 kB' 'KernelStack: 22832 kB' 'PageTables: 8768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 45734360 kB' 'Committed_AS: 12577252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220944 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2596320 kB' 'DirectMap2M: 27488256 kB' 'DirectMap1G: 56623104 kB' 00:03:43.664 10:58:04 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.664 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.664 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.664 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.664 10:58:04 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.664 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.664 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.664 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.664 10:58:04 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.664 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.664 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.664 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.664 10:58:04 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.664 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.664 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.664 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.664 10:58:04 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.664 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.664 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.664 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.664 10:58:04 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.664 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.664 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.664 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.664 10:58:04 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.664 10:58:04 -- setup/common.sh@32 -- # continue 
00:03:43.664 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.664 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.664 10:58:04 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.664 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.664 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.664 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.664 10:58:04 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.664 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.664 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.664 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.664 10:58:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.664 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.664 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.664 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.664 10:58:04 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.664 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.664 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.664 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.664 10:58:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.664 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.664 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.664 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.664 10:58:04 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.664 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.664 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.664 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.664 10:58:04 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.664 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.664 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.664 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.664 10:58:04 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.664 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.664 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.664 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.664 10:58:04 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # [[ Writeback == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 
00:03:43.665 10:58:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.665 10:58:04 -- setup/common.sh@33 -- # echo 0 00:03:43.665 10:58:04 -- setup/common.sh@33 -- # return 0 00:03:43.665 10:58:04 -- setup/hugepages.sh@97 -- # anon=0 00:03:43.665 10:58:04 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:43.665 10:58:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.665 10:58:04 -- setup/common.sh@18 -- # local node= 00:03:43.665 10:58:04 -- setup/common.sh@19 -- # local var val 00:03:43.665 10:58:04 -- setup/common.sh@20 -- # local mem_f mem 00:03:43.665 10:58:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.665 10:58:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.665 10:58:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.665 10:58:04 -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.665 10:58:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.665 10:58:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 76788668 kB' 'MemFree: 55208392 kB' 'MemAvailable: 60240352 kB' 'Buffers: 2708 kB' 'Cached: 15846292 kB' 'SwapCached: 0 kB' 
'Active: 12422656 kB' 'Inactive: 4039544 kB' 'Active(anon): 11240644 kB' 'Inactive(anon): 0 kB' 'Active(file): 1182012 kB' 'Inactive(file): 4039544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616412 kB' 'Mapped: 199692 kB' 'Shmem: 10627444 kB' 'KReclaimable: 524056 kB' 'Slab: 1482900 kB' 'SReclaimable: 524056 kB' 'SUnreclaim: 958844 kB' 'KernelStack: 22800 kB' 'PageTables: 8644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 45734360 kB' 'Committed_AS: 12571280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220892 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2596320 kB' 'DirectMap2M: 27488256 kB' 'DirectMap1G: 56623104 kB' 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.665 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.665 10:58:04 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:43.665 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 
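Every "continue" record in the trace above is one pass of get_meminfo's scan over /proc/meminfo: the field name just read is compared against the requested key (HugePages_Surp in this call) and skipped until it matches, at which point the value is echoed back to the caller. A minimal sketch of that scan loop, using a hypothetical helper name rather than the real setup/common.sh implementation (which buffers the file with mapfile first):

    # Hypothetical stand-in for setup/common.sh get_meminfo: scan a meminfo
    # file for one key and print its numeric value (the trailing "kB" is
    # captured by the throwaway field and dropped).
    meminfo_value() {
        local get=$1 mem_f=${2:-/proc/meminfo} var val _
        while IFS=': ' read -r var val _; do
            # Skip every field until the requested key is reached.
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < "$mem_f"
        return 1
    }

    # Usage on this system: meminfo_value HugePages_Surp   -> prints 0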
00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.666 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.666 10:58:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.667 10:58:04 -- setup/common.sh@33 -- # echo 0 00:03:43.667 10:58:04 -- setup/common.sh@33 -- # return 0 00:03:43.667 10:58:04 -- setup/hugepages.sh@99 -- # surp=0 00:03:43.667 10:58:04 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:43.667 10:58:04 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:43.667 10:58:04 -- setup/common.sh@18 -- # local node= 00:03:43.667 10:58:04 -- setup/common.sh@19 -- # local var val 00:03:43.667 10:58:04 -- setup/common.sh@20 -- # local mem_f mem 00:03:43.667 10:58:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.667 10:58:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.667 10:58:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.667 10:58:04 -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.667 10:58:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 10:58:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 76788668 kB' 'MemFree: 55209592 kB' 'MemAvailable: 60241552 kB' 'Buffers: 2708 kB' 'Cached: 15846304 kB' 'SwapCached: 0 kB' 'Active: 12423064 kB' 'Inactive: 4039544 kB' 'Active(anon): 11241052 kB' 'Inactive(anon): 0 kB' 'Active(file): 1182012 kB' 'Inactive(file): 4039544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616892 kB' 'Mapped: 199292 kB' 'Shmem: 10627456 kB' 'KReclaimable: 524056 kB' 'Slab: 1482900 kB' 'SReclaimable: 524056 kB' 'SUnreclaim: 958844 kB' 'KernelStack: 22832 kB' 'PageTables: 8748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 45734360 kB' 'Committed_AS: 12571668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220876 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2596320 kB' 'DirectMap2M: 27488256 kB' 'DirectMap1G: 56623104 kB' 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 
10:58:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 
10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.667 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.667 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.668 
10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.668 10:58:04 -- setup/common.sh@33 -- # echo 0 00:03:43.668 10:58:04 -- setup/common.sh@33 -- # return 0 00:03:43.668 10:58:04 -- setup/hugepages.sh@100 -- # resv=0 00:03:43.668 10:58:04 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:43.668 nr_hugepages=1024 00:03:43.668 10:58:04 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:43.668 resv_hugepages=0 00:03:43.668 10:58:04 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:43.668 surplus_hugepages=0 00:03:43.668 10:58:04 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:43.668 anon_hugepages=0 00:03:43.668 10:58:04 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:43.668 10:58:04 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:43.668 10:58:04 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:43.668 10:58:04 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:43.668 10:58:04 -- setup/common.sh@18 -- # local node= 00:03:43.668 10:58:04 -- setup/common.sh@19 -- # local var val 00:03:43.668 10:58:04 -- setup/common.sh@20 -- # local mem_f mem 00:03:43.668 10:58:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.668 10:58:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.668 10:58:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.668 10:58:04 -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.668 10:58:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 10:58:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 76788668 kB' 'MemFree: 55209268 kB' 'MemAvailable: 60241228 kB' 'Buffers: 2708 kB' 'Cached: 15846316 kB' 'SwapCached: 0 kB' 'Active: 12422908 kB' 'Inactive: 4039544 kB' 'Active(anon): 11240896 kB' 'Inactive(anon): 0 kB' 'Active(file): 1182012 kB' 'Inactive(file): 4039544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616700 kB' 'Mapped: 199292 kB' 'Shmem: 10627468 kB' 'KReclaimable: 524056 kB' 'Slab: 1482900 kB' 'SReclaimable: 524056 kB' 'SUnreclaim: 958844 kB' 'KernelStack: 22816 kB' 'PageTables: 8676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 45734360 kB' 'Committed_AS: 12571680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220876 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2596320 kB' 'DirectMap2M: 27488256 kB' 'DirectMap1G: 56623104 kB' 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.668 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.668 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
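This scan is fetching HugePages_Total so that hugepages.sh@110 can confirm the pool is exactly accounted for: the kernel reports 1024 total pages, and with surplus and reserved pages both 0, the requested nr_hugepages=1024 covers the whole pool (1024 == 1024 + 0 + 0). A compact restatement of that check, assuming the same variable names the trace uses:

    # Values taken from the trace above (hugepages.sh@97-110).
    nr_hugepages=1024   # requested pool size
    surp=0              # HugePages_Surp from /proc/meminfo
    resv=0              # HugePages_Rsvd from /proc/meminfo
    total=1024          # HugePages_Total from /proc/meminfo

    # The pool is consistent only if every reported page is either part of
    # the requested allocation, a surplus page, or a reserved page.
    (( total == nr_hugepages + surp + resv )) && echo "hugepage pool consistent"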
00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.669 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.669 10:58:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.669 10:58:04 -- setup/common.sh@33 -- # echo 1024 00:03:43.669 10:58:04 -- setup/common.sh@33 -- # return 0 00:03:43.669 10:58:04 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:43.669 10:58:04 -- setup/hugepages.sh@112 -- # get_nodes 00:03:43.670 10:58:04 -- setup/hugepages.sh@27 -- # local node 00:03:43.670 10:58:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.670 10:58:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:43.670 10:58:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.670 10:58:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:43.670 10:58:04 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:43.670 10:58:04 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:43.670 10:58:04 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:43.670 10:58:04 -- setup/hugepages.sh@116 -- # (( 
nodes_test[node] += resv )) 00:03:43.670 10:58:04 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:43.670 10:58:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.670 10:58:04 -- setup/common.sh@18 -- # local node=0 00:03:43.670 10:58:04 -- setup/common.sh@19 -- # local var val 00:03:43.670 10:58:04 -- setup/common.sh@20 -- # local mem_f mem 00:03:43.670 10:58:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.670 10:58:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:43.670 10:58:04 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:43.670 10:58:04 -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.670 10:58:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.670 10:58:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32624304 kB' 'MemFree: 25146264 kB' 'MemUsed: 7478040 kB' 'SwapCached: 0 kB' 'Active: 3966996 kB' 'Inactive: 379308 kB' 'Active(anon): 3054000 kB' 'Inactive(anon): 0 kB' 'Active(file): 912996 kB' 'Inactive(file): 379308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3916268 kB' 'Mapped: 83868 kB' 'AnonPages: 433196 kB' 'Shmem: 2623964 kB' 'KernelStack: 12792 kB' 'PageTables: 5388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 373344 kB' 'Slab: 972788 kB' 'SReclaimable: 373344 kB' 'SUnreclaim: 599444 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
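When get_meminfo is given a node argument (node=0 here), it reads the per-node counters from /sys/devices/system/node/node0/meminfo instead of the system-wide /proc/meminfo, and the "${mem[@]#Node +([0-9]) }" expansion strips the "Node 0 " prefix so the same key scan works unchanged. The node 0 dump is self-consistent: MemUsed = MemTotal - MemFree = 32624304 - 25146264 = 7478040 kB, and the node holds 512 of the 1024 hugepages. A sketch of the per-node file selection, reusing the hypothetical meminfo_value helper shown earlier and a sed strip in place of the extglob expansion:

    # Per-node lookup, assuming the meminfo_value helper sketched earlier.
    # Lines in the per-node file look like "Node 0 HugePages_Surp: 0", so the
    # "Node 0 " prefix is removed before the same key scan is applied.
    node=0
    node_f=/sys/devices/system/node/node$node/meminfo
    if [[ -e $node_f ]]; then
        meminfo_value HugePages_Surp <(sed "s/^Node $node //" "$node_f")
    fi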
00:03:43.670 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 
00:03:43.670 10:58:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # continue 
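Once node 0 reports HugePages_Surp of 0, the same per-node query repeats for node 1 (its dump follows below); with no_nodes=2 and 512 pages reported on each node, the two per-node pools add up to the 1024-page system total verified earlier. A one-line check under those assumptions:

    # Assumed per-node totals from the two node dumps in this trace.
    nodes_sys=([0]=512 [1]=512)
    total=0
    for n in "${!nodes_sys[@]}"; do
        (( total += nodes_sys[n] ))
    done
    (( total == 1024 )) && echo "per-node hugepages sum to the system total"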
00:03:43.670 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.670 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.670 10:58:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.671 10:58:04 -- setup/common.sh@33 -- # echo 0 00:03:43.671 10:58:04 -- setup/common.sh@33 -- # return 0 00:03:43.671 10:58:04 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:43.671 10:58:04 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:43.671 10:58:04 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:43.671 10:58:04 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:43.671 10:58:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.671 10:58:04 -- setup/common.sh@18 -- # local node=1 00:03:43.671 10:58:04 -- setup/common.sh@19 -- # local var val 00:03:43.671 10:58:04 -- setup/common.sh@20 -- # local mem_f mem 00:03:43.671 10:58:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.671 10:58:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:43.671 10:58:04 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:43.671 10:58:04 -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.671 10:58:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.671 10:58:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44164364 kB' 'MemFree: 30063004 kB' 'MemUsed: 14101360 kB' 'SwapCached: 0 kB' 'Active: 8456476 kB' 'Inactive: 3660236 kB' 'Active(anon): 8187460 kB' 'Inactive(anon): 0 kB' 'Active(file): 269016 kB' 'Inactive(file): 3660236 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11932756 kB' 'Mapped: 115424 kB' 'AnonPages: 184112 kB' 'Shmem: 8003504 kB' 'KernelStack: 10040 kB' 'PageTables: 3336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 150712 kB' 'Slab: 510112 kB' 'SReclaimable: 150712 kB' 'SUnreclaim: 359400 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.671 10:58:04 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # 
continue 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.671 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.671 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.672 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.672 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.672 10:58:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.672 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.672 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.672 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.672 10:58:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.672 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.672 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.672 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.672 10:58:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.672 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.672 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.672 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.672 10:58:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.672 10:58:04 -- setup/common.sh@32 -- # continue 00:03:43.672 10:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.672 10:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.672 10:58:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.672 10:58:04 -- setup/common.sh@33 -- # echo 0 00:03:43.672 10:58:04 -- setup/common.sh@33 -- # return 0 00:03:43.672 10:58:04 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:43.672 10:58:04 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:43.672 10:58:04 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:43.672 10:58:04 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:43.672 10:58:04 -- 
setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:43.672 node0=512 expecting 512 00:03:43.672 10:58:04 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:43.672 10:58:04 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:43.672 10:58:04 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:43.672 10:58:04 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:43.672 node1=512 expecting 512 00:03:43.672 10:58:04 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:43.672 00:03:43.672 real 0m3.750s 00:03:43.672 user 0m1.254s 00:03:43.672 sys 0m2.404s 00:03:43.672 10:58:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:43.672 10:58:04 -- common/autotest_common.sh@10 -- # set +x 00:03:43.672 ************************************ 00:03:43.672 END TEST even_2G_alloc 00:03:43.672 ************************************ 00:03:43.932 10:58:04 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:43.932 10:58:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:43.932 10:58:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:43.932 10:58:04 -- common/autotest_common.sh@10 -- # set +x 00:03:43.932 ************************************ 00:03:43.932 START TEST odd_alloc 00:03:43.932 ************************************ 00:03:43.932 10:58:04 -- common/autotest_common.sh@1114 -- # odd_alloc 00:03:43.932 10:58:04 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:43.932 10:58:04 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:43.932 10:58:04 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:43.932 10:58:04 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:43.932 10:58:04 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:43.932 10:58:04 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:43.932 10:58:04 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:43.932 10:58:04 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:43.932 10:58:04 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:43.932 10:58:04 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:43.932 10:58:04 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:43.932 10:58:04 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:43.932 10:58:04 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:43.932 10:58:04 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:43.932 10:58:04 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:43.932 10:58:04 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:43.932 10:58:04 -- setup/hugepages.sh@83 -- # : 513 00:03:43.932 10:58:04 -- setup/hugepages.sh@84 -- # : 1 00:03:43.932 10:58:04 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:43.932 10:58:04 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:43.932 10:58:04 -- setup/hugepages.sh@83 -- # : 0 00:03:43.932 10:58:04 -- setup/hugepages.sh@84 -- # : 0 00:03:43.932 10:58:04 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:43.932 10:58:04 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:43.932 10:58:04 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:43.932 10:58:04 -- setup/hugepages.sh@160 -- # setup output 00:03:43.932 10:58:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.932 10:58:04 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:46.468 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:46.468 0000:00:04.6 (8086 2021): Already using the vfio-pci 
driver 00:03:46.468 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:46.468 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:46.468 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:46.468 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:46.468 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:46.468 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:46.468 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:46.468 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:46.468 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:46.468 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:46.468 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:46.468 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:46.468 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:46.468 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:46.468 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:47.851 10:58:08 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:47.851 10:58:08 -- setup/hugepages.sh@89 -- # local node 00:03:47.851 10:58:08 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:47.851 10:58:08 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:47.851 10:58:08 -- setup/hugepages.sh@92 -- # local surp 00:03:47.851 10:58:08 -- setup/hugepages.sh@93 -- # local resv 00:03:47.851 10:58:08 -- setup/hugepages.sh@94 -- # local anon 00:03:47.851 10:58:08 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:47.851 10:58:08 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:47.851 10:58:08 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:47.851 10:58:08 -- setup/common.sh@18 -- # local node= 00:03:47.851 10:58:08 -- setup/common.sh@19 -- # local var val 00:03:47.851 10:58:08 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.851 10:58:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.851 10:58:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.851 10:58:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.851 10:58:08 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.851 10:58:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 10:58:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 76788668 kB' 'MemFree: 55226752 kB' 'MemAvailable: 60258712 kB' 'Buffers: 2708 kB' 'Cached: 15846436 kB' 'SwapCached: 0 kB' 'Active: 12424724 kB' 'Inactive: 4039544 kB' 'Active(anon): 11242712 kB' 'Inactive(anon): 0 kB' 'Active(file): 1182012 kB' 'Inactive(file): 4039544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 618372 kB' 'Mapped: 199488 kB' 'Shmem: 10627588 kB' 'KReclaimable: 524056 kB' 'Slab: 1483196 kB' 'SReclaimable: 524056 kB' 'SUnreclaim: 959140 kB' 'KernelStack: 22848 kB' 'PageTables: 8804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 45733336 kB' 'Committed_AS: 12572304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220956 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2596320 kB' 'DirectMap2M: 27488256 kB' 'DirectMap1G: 56623104 kB' 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # continue 
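The odd_alloc setup traced a few lines back requests 1025 hugepages (HUGEMEM=2049, get_test_nr_hugepages 2098176) across 2 NUMA nodes and records nodes_test values of 513 and 512. As a reference point, here is a small bash sketch of that arithmetic only, not the actual setup/hugepages.sh loop: an odd total splits into floor(total/nodes) per node, with the leftover page handed to the lower-numbered node, matching the 513/512 assignments seen in the trace.

split_hugepages_sketch() {
    local total=$1 nodes=$2
    local -a per_node
    local node remainder=$(( total % nodes ))
    for (( node = nodes - 1; node >= 0; node-- )); do
        per_node[node]=$(( total / nodes ))
        # Leftover pages go to the lower-numbered nodes, one each.
        if (( node < remainder )); then
            per_node[node]=$(( per_node[node] + 1 ))
        fi
    done
    for node in "${!per_node[@]}"; do
        echo "node${node}=${per_node[node]}"
    done
}
split_hugepages_sketch 1025 2   # -> node0=513, node1=512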
00:03:47.851 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.851 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.851 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.852 10:58:08 -- setup/common.sh@33 -- # echo 0 00:03:47.852 10:58:08 -- setup/common.sh@33 -- # return 0 00:03:47.852 10:58:08 -- setup/hugepages.sh@97 -- # anon=0 00:03:47.852 10:58:08 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:47.852 10:58:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.852 10:58:08 -- setup/common.sh@18 -- # local node= 00:03:47.852 10:58:08 -- setup/common.sh@19 -- # local var val 00:03:47.852 10:58:08 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.852 10:58:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.852 10:58:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.852 10:58:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.852 10:58:08 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.852 10:58:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 10:58:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 76788668 kB' 'MemFree: 55228884 kB' 'MemAvailable: 60260844 kB' 'Buffers: 2708 kB' 'Cached: 15846440 kB' 'SwapCached: 0 kB' 'Active: 12424460 kB' 'Inactive: 4039544 kB' 'Active(anon): 11242448 kB' 'Inactive(anon): 0 kB' 'Active(file): 1182012 kB' 'Inactive(file): 4039544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 618076 kB' 'Mapped: 199328 kB' 'Shmem: 10627592 kB' 'KReclaimable: 524056 kB' 'Slab: 1483144 kB' 'SReclaimable: 524056 kB' 'SUnreclaim: 959088 kB' 'KernelStack: 22832 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 45733336 kB' 'Committed_AS: 12572316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220924 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2596320 kB' 'DirectMap2M: 27488256 kB' 'DirectMap1G: 56623104 kB' 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # [[ 
MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 10:58:08 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.852 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.852 10:58:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.853 
10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.853 10:58:08 -- setup/common.sh@33 -- # echo 0 00:03:47.853 10:58:08 -- setup/common.sh@33 
-- # return 0 00:03:47.853 10:58:08 -- setup/hugepages.sh@99 -- # surp=0 00:03:47.853 10:58:08 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:47.853 10:58:08 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:47.853 10:58:08 -- setup/common.sh@18 -- # local node= 00:03:47.853 10:58:08 -- setup/common.sh@19 -- # local var val 00:03:47.853 10:58:08 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.853 10:58:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.853 10:58:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.853 10:58:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.853 10:58:08 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.853 10:58:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 10:58:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 76788668 kB' 'MemFree: 55227908 kB' 'MemAvailable: 60259868 kB' 'Buffers: 2708 kB' 'Cached: 15846452 kB' 'SwapCached: 0 kB' 'Active: 12424476 kB' 'Inactive: 4039544 kB' 'Active(anon): 11242464 kB' 'Inactive(anon): 0 kB' 'Active(file): 1182012 kB' 'Inactive(file): 4039544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 618080 kB' 'Mapped: 199328 kB' 'Shmem: 10627604 kB' 'KReclaimable: 524056 kB' 'Slab: 1483144 kB' 'SReclaimable: 524056 kB' 'SUnreclaim: 959088 kB' 'KernelStack: 22832 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 45733336 kB' 'Committed_AS: 12572332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220924 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2596320 kB' 'DirectMap2M: 27488256 kB' 'DirectMap1G: 56623104 kB' 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.853 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.853 10:58:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 
-- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.854 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.854 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.855 10:58:08 -- setup/common.sh@33 -- # echo 0 00:03:47.855 10:58:08 -- setup/common.sh@33 -- # return 0 00:03:47.855 10:58:08 -- setup/hugepages.sh@100 -- # resv=0 00:03:47.855 10:58:08 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:47.855 nr_hugepages=1025 00:03:47.855 10:58:08 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:47.855 resv_hugepages=0 00:03:47.855 10:58:08 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:47.855 surplus_hugepages=0 00:03:47.855 10:58:08 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:47.855 anon_hugepages=0 00:03:47.855 10:58:08 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:47.855 10:58:08 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:47.855 10:58:08 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:47.855 10:58:08 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:47.855 10:58:08 -- setup/common.sh@18 -- # local node= 00:03:47.855 10:58:08 -- setup/common.sh@19 -- # local var val 00:03:47.855 10:58:08 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.855 10:58:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.855 10:58:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.855 10:58:08 -- setup/common.sh@25 -- # 
[[ -n '' ]] 00:03:47.855 10:58:08 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.855 10:58:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.855 10:58:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 76788668 kB' 'MemFree: 55227908 kB' 'MemAvailable: 60259868 kB' 'Buffers: 2708 kB' 'Cached: 15846464 kB' 'SwapCached: 0 kB' 'Active: 12424512 kB' 'Inactive: 4039544 kB' 'Active(anon): 11242500 kB' 'Inactive(anon): 0 kB' 'Active(file): 1182012 kB' 'Inactive(file): 4039544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 618088 kB' 'Mapped: 199328 kB' 'Shmem: 10627616 kB' 'KReclaimable: 524056 kB' 'Slab: 1483144 kB' 'SReclaimable: 524056 kB' 'SUnreclaim: 959088 kB' 'KernelStack: 22832 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 45733336 kB' 'Committed_AS: 12572344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220924 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2596320 kB' 'DirectMap2M: 27488256 kB' 'DirectMap1G: 56623104 kB' 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.855 10:58:08 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.855 10:58:08 -- 
setup/common.sh@32 -- # continue 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.855 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.855 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 
00:03:47.856 10:58:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.856 10:58:08 -- 
setup/common.sh@32 -- # continue 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.856 10:58:08 -- setup/common.sh@33 -- # echo 1025 00:03:47.856 10:58:08 -- setup/common.sh@33 -- # return 0 00:03:47.856 10:58:08 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:47.856 10:58:08 -- setup/hugepages.sh@112 -- # get_nodes 00:03:47.856 10:58:08 -- setup/hugepages.sh@27 -- # local node 00:03:47.856 10:58:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.856 10:58:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:47.856 10:58:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.856 10:58:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:47.856 10:58:08 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:47.856 10:58:08 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:47.856 10:58:08 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.856 10:58:08 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.856 10:58:08 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:47.856 10:58:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.856 10:58:08 -- setup/common.sh@18 -- # local node=0 00:03:47.856 10:58:08 -- setup/common.sh@19 -- # local var val 00:03:47.856 10:58:08 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.856 10:58:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.856 10:58:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:47.856 10:58:08 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:47.856 10:58:08 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.856 10:58:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.856 10:58:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32624304 kB' 'MemFree: 25135836 kB' 'MemUsed: 7488468 kB' 'SwapCached: 0 kB' 'Active: 3967072 kB' 'Inactive: 379308 kB' 'Active(anon): 3054076 kB' 'Inactive(anon): 0 kB' 'Active(file): 912996 kB' 'Inactive(file): 379308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3916332 kB' 'Mapped: 83868 kB' 'AnonPages: 433188 kB' 'Shmem: 2624028 kB' 'KernelStack: 12808 kB' 'PageTables: 5388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'KReclaimable: 373344 kB' 'Slab: 972780 kB' 'SReclaimable: 373344 kB' 'SUnreclaim: 599436 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.856 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.856 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # continue 
00:03:47.857 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.857 10:58:08 -- setup/common.sh@33 -- # echo 0 00:03:47.857 10:58:08 -- setup/common.sh@33 -- # return 0 
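The trace above is setup/common.sh's get_meminfo walking /proc/meminfo (or the per-node meminfo file when a node id is given) field by field with IFS=': ' until it reaches the requested key, then echoing that value and returning. A minimal standalone sketch of that lookup pattern, written as a simplified re-creation rather than the actual SPDK helper, could look like this:

    #!/usr/bin/env bash
    # Minimal sketch of the meminfo lookup pattern seen in the trace above.
    # Assumption: simplified re-creation, not the real setup/common.sh source.
    shopt -s extglob

    get_meminfo_sketch() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        # Per-node queries read the node-specific file instead (node0, node1, ...).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every entry with "Node <id> "; strip it, as the
        # trace's mem=("${mem[@]#Node +([0-9]) }") step does.
        mem=("${mem[@]#Node +([0-9]) }")
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"      # e.g. 1025 for HugePages_Total, 0 for HugePages_Rsvd
                return 0
            fi
        done
        return 1
    }

    get_meminfo_sketch HugePages_Rsvd      # system-wide lookup
    get_meminfo_sketch HugePages_Surp 0    # node 0 lookup, as in the trace above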
00:03:47.857 10:58:08 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.857 10:58:08 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.857 10:58:08 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.857 10:58:08 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:47.857 10:58:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.857 10:58:08 -- setup/common.sh@18 -- # local node=1 00:03:47.857 10:58:08 -- setup/common.sh@19 -- # local var val 00:03:47.857 10:58:08 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.857 10:58:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.857 10:58:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:47.857 10:58:08 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:47.857 10:58:08 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.857 10:58:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.857 10:58:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44164364 kB' 'MemFree: 30091820 kB' 'MemUsed: 14072544 kB' 'SwapCached: 0 kB' 'Active: 8457452 kB' 'Inactive: 3660236 kB' 'Active(anon): 8188436 kB' 'Inactive(anon): 0 kB' 'Active(file): 269016 kB' 'Inactive(file): 3660236 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11932868 kB' 'Mapped: 115460 kB' 'AnonPages: 184892 kB' 'Shmem: 8003616 kB' 'KernelStack: 10024 kB' 'PageTables: 3340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 150712 kB' 'Slab: 510364 kB' 'SReclaimable: 150712 kB' 'SUnreclaim: 359652 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.857 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.857 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # continue 
00:03:47.858 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # [[ KernelStack 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.858 
10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # continue 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.858 10:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.858 10:58:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.858 10:58:08 -- setup/common.sh@33 -- # echo 0 00:03:47.858 10:58:08 -- setup/common.sh@33 -- # return 0 00:03:47.858 10:58:08 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.858 10:58:08 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.858 10:58:08 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.858 10:58:08 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.858 10:58:08 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:47.858 node0=512 expecting 513 00:03:47.858 10:58:08 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.858 10:58:08 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.858 10:58:08 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.858 10:58:08 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:47.858 node1=513 expecting 512 00:03:47.858 10:58:08 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:47.858 00:03:47.858 real 0m3.987s 00:03:47.858 user 0m1.308s 00:03:47.858 sys 0m2.636s 00:03:47.858 10:58:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:47.858 10:58:08 -- common/autotest_common.sh@10 -- # set +x 00:03:47.858 ************************************ 00:03:47.858 END TEST odd_alloc 00:03:47.858 ************************************ 00:03:47.858 10:58:08 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:47.858 10:58:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:47.858 10:58:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:47.858 10:58:08 -- common/autotest_common.sh@10 -- # set +x 00:03:47.858 ************************************ 00:03:47.858 START TEST custom_alloc 00:03:47.858 ************************************ 00:03:47.858 10:58:08 -- common/autotest_common.sh@1114 -- # custom_alloc 00:03:47.858 10:58:08 -- setup/hugepages.sh@167 -- # local IFS=, 
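With surplus and reserved hugepages both reported as 0, the odd_alloc accounting just traced works out exactly: the odd total of 1025 hugepages splits unevenly across the two NUMA nodes as 512 and 513 pages, consistent with the node0=512 and node1=513 lines above. A quick recap of that arithmetic, using only the values the log reports:

    # Values as reported in the trace above; nothing new is introduced here.
    nr_hugepages=1025 surp=0 resv=0
    (( nr_hugepages + surp + resv == 1025 )) && echo "global hugepage accounting holds"
    (( 512 + 513 == nr_hugepages ))          && echo "node split 512+513 covers all 1025 pages"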
00:03:47.858 10:58:08 -- setup/hugepages.sh@169 -- # local node 00:03:47.858 10:58:08 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:47.858 10:58:08 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:47.858 10:58:08 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:47.859 10:58:08 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:47.859 10:58:08 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:47.859 10:58:08 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:47.859 10:58:08 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:47.859 10:58:08 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:47.859 10:58:08 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:47.859 10:58:08 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:47.859 10:58:08 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:47.859 10:58:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:47.859 10:58:08 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:47.859 10:58:08 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:47.859 10:58:08 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:47.859 10:58:08 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:47.859 10:58:08 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:47.859 10:58:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:47.859 10:58:08 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:47.859 10:58:08 -- setup/hugepages.sh@83 -- # : 256 00:03:47.859 10:58:08 -- setup/hugepages.sh@84 -- # : 1 00:03:47.859 10:58:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:47.859 10:58:08 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:47.859 10:58:08 -- setup/hugepages.sh@83 -- # : 0 00:03:47.859 10:58:08 -- setup/hugepages.sh@84 -- # : 0 00:03:47.859 10:58:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:47.859 10:58:08 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:47.859 10:58:08 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:47.859 10:58:08 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:47.859 10:58:08 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:47.859 10:58:08 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:47.859 10:58:08 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:47.859 10:58:08 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:47.859 10:58:08 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:47.859 10:58:08 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:47.859 10:58:08 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:47.859 10:58:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:47.859 10:58:08 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:47.859 10:58:08 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:47.859 10:58:08 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:47.859 10:58:08 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:47.859 10:58:08 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:47.859 10:58:08 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:47.859 10:58:08 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:47.859 10:58:08 -- setup/hugepages.sh@78 -- # return 0 00:03:47.859 10:58:08 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:47.859 10:58:08 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:47.859 10:58:08 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:47.859 10:58:08 -- 
setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:47.859 10:58:08 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:47.859 10:58:08 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:47.859 10:58:08 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:47.859 10:58:08 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:47.859 10:58:08 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:47.859 10:58:08 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:47.859 10:58:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:47.859 10:58:08 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:47.859 10:58:08 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:47.859 10:58:08 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:47.859 10:58:08 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:47.859 10:58:08 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:47.859 10:58:08 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:47.859 10:58:08 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:47.859 10:58:08 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:47.859 10:58:08 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:47.859 10:58:08 -- setup/hugepages.sh@78 -- # return 0 00:03:47.859 10:58:08 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:47.859 10:58:08 -- setup/hugepages.sh@187 -- # setup output 00:03:47.859 10:58:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.859 10:58:08 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:51.229 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:51.229 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:51.229 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:51.229 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:51.229 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:51.229 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:51.229 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:51.229 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:51.229 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:51.229 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:51.229 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:51.229 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:51.229 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:51.229 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:51.229 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:51.229 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:51.229 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:51.925 10:58:12 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:51.925 10:58:12 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:51.925 10:58:12 -- setup/hugepages.sh@89 -- # local node 00:03:51.925 10:58:12 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:51.925 10:58:12 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:51.925 10:58:12 -- setup/hugepages.sh@92 -- # local surp 00:03:51.925 10:58:12 -- setup/hugepages.sh@93 -- # local resv 00:03:51.925 10:58:12 -- setup/hugepages.sh@94 -- # local anon 00:03:51.925 10:58:12 -- setup/hugepages.sh@96 -- # [[ always [madvise] 
never != *\[\n\e\v\e\r\]* ]] 00:03:51.925 10:58:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:51.925 10:58:12 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:51.925 10:58:12 -- setup/common.sh@18 -- # local node= 00:03:51.925 10:58:12 -- setup/common.sh@19 -- # local var val 00:03:51.925 10:58:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:51.925 10:58:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.925 10:58:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.925 10:58:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.925 10:58:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.925 10:58:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.925 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.925 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.925 10:58:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 76788668 kB' 'MemFree: 54191480 kB' 'MemAvailable: 59223440 kB' 'Buffers: 2708 kB' 'Cached: 15846604 kB' 'SwapCached: 0 kB' 'Active: 12424952 kB' 'Inactive: 4039544 kB' 'Active(anon): 11242940 kB' 'Inactive(anon): 0 kB' 'Active(file): 1182012 kB' 'Inactive(file): 4039544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 618552 kB' 'Mapped: 199388 kB' 'Shmem: 10627756 kB' 'KReclaimable: 524056 kB' 'Slab: 1483636 kB' 'SReclaimable: 524056 kB' 'SUnreclaim: 959580 kB' 'KernelStack: 22784 kB' 'PageTables: 8756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 45210072 kB' 'Committed_AS: 12573248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220780 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2596320 kB' 'DirectMap2M: 27488256 kB' 'DirectMap1G: 56623104 kB' 00:03:51.925 10:58:12 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.925 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.925 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.925 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.925 10:58:12 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.925 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.925 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.925 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.925 10:58:12 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.925 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.925 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.925 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.925 10:58:12 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.925 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.925 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.925 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.925 10:58:12 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.925 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.925 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.925 10:58:12 -- 
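The custom_alloc setup traced above derives its per-node plan from the requested sizes: with Hugepagesize reported as 2048 kB, the 1048576 request becomes 512 pages for nodes_hp[0] and the 2097152 request becomes 1024 pages for nodes_hp[1], giving the 1536 total seen in nr_hugepages=1536 and HugePages_Total: 1536. A small arithmetic sketch, assuming those request sizes are in kB as the 2048 kB page size suggests:

    # Arithmetic sketch of the per-node plan computed above (sizes assumed to be kB).
    hugepagesize_kb=2048
    nodes_hp0=$(( 1048576 / hugepagesize_kb ))   # 512 pages planned for node 0
    nodes_hp1=$(( 2097152 / hugepagesize_kb ))   # 1024 pages planned for node 1
    echo "HUGENODE=nodes_hp[0]=${nodes_hp0},nodes_hp[1]=${nodes_hp1}"
    echo "total=$(( nodes_hp0 + nodes_hp1 ))"    # 1536, matching nr_hugepages=1536 above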
setup/common.sh@31 -- # read -r var val _ 00:03:51.925 10:58:12 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.925 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.925 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.925 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.925 10:58:12 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.925 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.925 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.925 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.925 10:58:12 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.925 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.925 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.925 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.925 10:58:12 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.925 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.925 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.925 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.925 10:58:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.925 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.925 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.925 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.925 10:58:12 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.925 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.925 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.925 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.925 10:58:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.925 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.925 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.925 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.925 10:58:12 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.925 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.925 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.925 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.925 10:58:12 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.925 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.925 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.925 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.925 10:58:12 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.925 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.925 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.925 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.925 10:58:12 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.925 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.925 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.925 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.926 
10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:51.926 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.926 10:58:12 -- setup/common.sh@33 -- # echo 0 00:03:51.926 10:58:12 -- setup/common.sh@33 -- # return 0 00:03:51.926 10:58:12 -- setup/hugepages.sh@97 -- # anon=0 00:03:51.926 10:58:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:51.926 10:58:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:51.926 10:58:12 -- setup/common.sh@18 -- # local node= 00:03:51.926 10:58:12 -- setup/common.sh@19 -- # local var val 00:03:51.926 10:58:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:51.926 10:58:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.926 10:58:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.926 10:58:12 -- setup/common.sh@25 -- # [[ -n '' ]] 
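The get_meminfo helper being traced above (setup/common.sh@17-33) reduces to a small parser over /proc/meminfo, or over a node's own meminfo file when a node number is supplied. A rough, self-contained sketch of that logic, with names mirroring the trace (the loop structure is simplified and illustrative, not the script's exact code):

    #!/usr/bin/env bash
    shopt -s extglob
    # Sketch of get_meminfo as seen in the setup/common.sh trace: return the value
    # for one meminfo key, optionally from a specific NUMA node's meminfo file.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # per-node files prefix each line with "Node N "
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"              # kB for sizes, a bare count for HugePages_*
                return 0
            fi
        done
        echo 0
    }
    get_meminfo HugePages_Surp        # system-wide
    get_meminfo HugePages_Free 0      # node 0 only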
00:03:51.926 10:58:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.926 10:58:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.926 10:58:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 76788668 kB' 'MemFree: 54190648 kB' 'MemAvailable: 59222600 kB' 'Buffers: 2708 kB' 'Cached: 15846608 kB' 'SwapCached: 0 kB' 'Active: 12424820 kB' 'Inactive: 4039544 kB' 'Active(anon): 11242808 kB' 'Inactive(anon): 0 kB' 'Active(file): 1182012 kB' 'Inactive(file): 4039544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 618448 kB' 'Mapped: 199320 kB' 'Shmem: 10627760 kB' 'KReclaimable: 524048 kB' 'Slab: 1483728 kB' 'SReclaimable: 524048 kB' 'SUnreclaim: 959680 kB' 'KernelStack: 22784 kB' 'PageTables: 8776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 45210072 kB' 'Committed_AS: 12573260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220764 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2596320 kB' 'DirectMap2M: 27488256 kB' 'DirectMap1G: 56623104 kB' 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.926 10:58:12 -- setup/common.sh@32 -- # continue 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.926 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.189 10:58:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.189 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.189 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.189 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.189 10:58:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.189 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.189 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.189 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.189 10:58:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.189 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.189 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.189 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.189 10:58:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.189 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.189 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.189 10:58:12 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:52.189 10:58:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.189 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.189 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.189 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.189 10:58:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.189 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.189 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.189 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.189 10:58:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.189 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.189 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.189 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.189 10:58:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.189 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.189 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.189 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.189 10:58:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.189 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.189 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.189 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.189 10:58:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.189 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.189 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 
00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.190 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.190 10:58:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.190 10:58:12 -- setup/common.sh@33 -- # echo 0 00:03:52.190 10:58:12 -- setup/common.sh@33 -- # return 0 00:03:52.190 10:58:12 -- setup/hugepages.sh@99 -- # surp=0 00:03:52.190 10:58:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:52.190 10:58:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:52.190 10:58:12 -- setup/common.sh@18 -- # local node= 00:03:52.190 10:58:12 -- setup/common.sh@19 -- # local var val 00:03:52.190 10:58:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.190 10:58:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.190 10:58:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.190 10:58:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.190 10:58:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.190 10:58:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.191 10:58:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 76788668 kB' 'MemFree: 54190152 kB' 'MemAvailable: 59222104 kB' 'Buffers: 2708 kB' 'Cached: 15846620 kB' 'SwapCached: 0 kB' 'Active: 12424840 kB' 'Inactive: 4039544 kB' 'Active(anon): 11242828 kB' 'Inactive(anon): 0 kB' 'Active(file): 1182012 kB' 'Inactive(file): 4039544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 618448 kB' 'Mapped: 199320 kB' 'Shmem: 10627772 kB' 'KReclaimable: 524048 kB' 'Slab: 1483728 kB' 'SReclaimable: 524048 kB' 'SUnreclaim: 959680 kB' 'KernelStack: 22768 
kB' 'PageTables: 8776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 45210072 kB' 'Committed_AS: 12573276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220780 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2596320 kB' 'DirectMap2M: 27488256 kB' 'DirectMap1G: 56623104 kB' 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # continue 
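A readability note: the heavily backslash-escaped words such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d in the trace are not corruption. They are how bash's xtrace renders the quoted, literal right-hand side of a [[ ... == ... ]] comparison, escaping every character so the pattern is shown as a non-glob match. A minimal reproduction in plain bash (no SPDK scripts involved; the exact rendering may vary slightly by bash version):

    # Reproduce the escaped-pattern rendering seen above; output format is approximate.
    set -x
    get=HugePages_Rsvd
    var=MemTotal
    [[ $var == "$get" ]]   # xtrace shows roughly: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
    set +x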
00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.191 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.191 10:58:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.192 10:58:12 -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.192 10:58:12 -- setup/common.sh@33 -- # echo 0 00:03:52.192 10:58:12 -- setup/common.sh@33 -- # return 0 00:03:52.192 10:58:12 -- setup/hugepages.sh@100 -- # resv=0 00:03:52.192 10:58:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:52.192 nr_hugepages=1536 00:03:52.192 10:58:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:52.192 resv_hugepages=0 00:03:52.192 10:58:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:52.192 surplus_hugepages=0 00:03:52.192 10:58:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:52.192 anon_hugepages=0 00:03:52.192 10:58:12 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:52.192 10:58:12 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:52.192 10:58:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:52.192 10:58:12 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:52.192 10:58:12 -- setup/common.sh@18 -- # local node= 00:03:52.192 10:58:12 -- setup/common.sh@19 -- # local var val 00:03:52.192 10:58:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.192 10:58:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.192 10:58:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.192 10:58:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.192 10:58:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.192 10:58:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.192 10:58:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 76788668 kB' 'MemFree: 54190316 kB' 'MemAvailable: 59222268 kB' 'Buffers: 2708 kB' 'Cached: 15846632 kB' 'SwapCached: 0 kB' 'Active: 12424852 kB' 'Inactive: 4039544 kB' 'Active(anon): 11242840 kB' 'Inactive(anon): 0 kB' 'Active(file): 1182012 kB' 'Inactive(file): 4039544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 618444 kB' 'Mapped: 199320 kB' 'Shmem: 10627784 kB' 'KReclaimable: 524048 kB' 'Slab: 1483728 kB' 'SReclaimable: 524048 kB' 'SUnreclaim: 959680 kB' 'KernelStack: 22768 kB' 'PageTables: 8776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 45210072 kB' 'Committed_AS: 12573288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220780 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2596320 kB' 
'DirectMap2M: 27488256 kB' 'DirectMap1G: 56623104 kB' 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:52.192 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.192 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.192 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 
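The arithmetic this pass is building toward is small: HUGENODE earlier in the log requested 512 pages on node 0 and 1024 on node 1, so verify_nr_hugepages expects a global HugePages_Total of 1536 with no surplus or reserved pages, and then re-checks the split per node. A sketch with the numbers from this run (illustrative only; the real script loops over nodes_test rather than hard-coding two nodes):

    # Numbers taken from this run's log: HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'.
    requested=(512 1024)                 # pages requested on node0 / node1
    nr_hugepages=1536                    # HugePages_Total from /proc/meminfo
    surp=0 resv=0                        # HugePages_Surp / HugePages_Rsvd
    (( requested[0] + requested[1] == nr_hugepages + surp + resv )) && echo "global total OK"
    # Each node is then checked against /sys/devices/system/node/nodeN/meminfo;
    # node0's meminfo further down in the log reports 'HugePages_Total: 512',
    # matching requested[0].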
00:03:52.193 10:58:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.193 10:58:12 -- setup/common.sh@32 
-- # continue 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.193 10:58:12 -- setup/common.sh@33 -- # echo 1536 00:03:52.193 10:58:12 -- setup/common.sh@33 -- # return 0 00:03:52.193 10:58:12 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:52.193 10:58:12 -- setup/hugepages.sh@112 -- # get_nodes 00:03:52.193 10:58:12 -- setup/hugepages.sh@27 -- # local node 00:03:52.193 10:58:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.193 10:58:12 -- 
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:52.193 10:58:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.193 10:58:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:52.193 10:58:12 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:52.193 10:58:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:52.193 10:58:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.193 10:58:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.193 10:58:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:52.193 10:58:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.193 10:58:12 -- setup/common.sh@18 -- # local node=0 00:03:52.193 10:58:12 -- setup/common.sh@19 -- # local var val 00:03:52.193 10:58:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.193 10:58:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.193 10:58:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:52.193 10:58:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:52.193 10:58:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.193 10:58:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.193 10:58:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32624304 kB' 'MemFree: 25145416 kB' 'MemUsed: 7478888 kB' 'SwapCached: 0 kB' 'Active: 3966384 kB' 'Inactive: 379308 kB' 'Active(anon): 3053388 kB' 'Inactive(anon): 0 kB' 'Active(file): 912996 kB' 'Inactive(file): 379308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3916344 kB' 'Mapped: 83868 kB' 'AnonPages: 432516 kB' 'Shmem: 2624040 kB' 'KernelStack: 12712 kB' 'PageTables: 5340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 373344 kB' 'Slab: 972688 kB' 'SReclaimable: 373344 kB' 'SUnreclaim: 599344 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.193 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.193 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # 
continue 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # [[ 
Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.194 
10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.194 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.194 10:58:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.194 10:58:12 -- setup/common.sh@33 -- # echo 0 00:03:52.194 10:58:12 -- setup/common.sh@33 -- # return 0 00:03:52.194 10:58:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.194 10:58:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.194 10:58:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.194 10:58:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:52.194 10:58:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.194 10:58:12 -- setup/common.sh@18 -- # local node=1 00:03:52.194 10:58:12 -- setup/common.sh@19 -- # local var val 00:03:52.194 10:58:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.195 10:58:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.195 10:58:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:52.195 10:58:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:52.195 10:58:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.195 10:58:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.195 10:58:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44164364 kB' 'MemFree: 29044900 kB' 'MemUsed: 15119464 kB' 'SwapCached: 0 kB' 'Active: 8458508 kB' 'Inactive: 3660236 kB' 'Active(anon): 8189492 kB' 'Inactive(anon): 0 kB' 'Active(file): 269016 kB' 'Inactive(file): 3660236 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11933024 kB' 'Mapped: 115452 kB' 'AnonPages: 185928 kB' 'Shmem: 8003772 
kB' 'KernelStack: 10056 kB' 'PageTables: 3436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 150704 kB' 'Slab: 511040 kB' 'SReclaimable: 150704 kB' 'SUnreclaim: 360336 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.195 10:58:12 -- setup/common.sh@32 
-- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.195 10:58:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # continue 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.195 10:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.195 10:58:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.195 
10:58:12 -- setup/common.sh@33 -- # echo 0 00:03:52.195 10:58:12 -- setup/common.sh@33 -- # return 0 00:03:52.195 10:58:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.195 10:58:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.195 10:58:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.196 10:58:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.196 10:58:12 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:52.196 node0=512 expecting 512 00:03:52.196 10:58:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.196 10:58:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.196 10:58:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.196 10:58:12 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:52.196 node1=1024 expecting 1024 00:03:52.196 10:58:12 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:52.196 00:03:52.196 real 0m4.354s 00:03:52.196 user 0m1.603s 00:03:52.196 sys 0m2.755s 00:03:52.196 10:58:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:52.196 10:58:12 -- common/autotest_common.sh@10 -- # set +x 00:03:52.196 ************************************ 00:03:52.196 END TEST custom_alloc 00:03:52.196 ************************************ 00:03:52.196 10:58:12 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:52.196 10:58:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:52.196 10:58:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:52.196 10:58:12 -- common/autotest_common.sh@10 -- # set +x 00:03:52.196 ************************************ 00:03:52.196 START TEST no_shrink_alloc 00:03:52.196 ************************************ 00:03:52.196 10:58:12 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:03:52.196 10:58:12 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:52.196 10:58:12 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:52.196 10:58:12 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:52.196 10:58:12 -- setup/hugepages.sh@51 -- # shift 00:03:52.196 10:58:12 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:52.196 10:58:12 -- setup/hugepages.sh@52 -- # local node_ids 00:03:52.196 10:58:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:52.196 10:58:12 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:52.196 10:58:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:52.196 10:58:12 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:52.196 10:58:12 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:52.196 10:58:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:52.196 10:58:12 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:52.196 10:58:12 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:52.196 10:58:12 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:52.196 10:58:12 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:52.196 10:58:12 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:52.196 10:58:12 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:52.196 10:58:12 -- setup/hugepages.sh@73 -- # return 0 00:03:52.196 10:58:12 -- setup/hugepages.sh@198 -- # setup output 00:03:52.196 10:58:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.196 10:58:12 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:54.730 0000:00:04.7 
(8086 2021): Already using the vfio-pci driver 00:03:54.730 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:54.730 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:54.730 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:54.730 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:54.730 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:54.730 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:54.730 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:54.730 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:54.730 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:54.730 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:54.730 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:54.730 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:54.730 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:54.730 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:54.730 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:54.730 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:56.122 10:58:16 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:56.122 10:58:16 -- setup/hugepages.sh@89 -- # local node 00:03:56.122 10:58:16 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:56.122 10:58:16 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:56.122 10:58:16 -- setup/hugepages.sh@92 -- # local surp 00:03:56.122 10:58:16 -- setup/hugepages.sh@93 -- # local resv 00:03:56.122 10:58:16 -- setup/hugepages.sh@94 -- # local anon 00:03:56.122 10:58:16 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:56.122 10:58:16 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:56.122 10:58:16 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:56.122 10:58:16 -- setup/common.sh@18 -- # local node= 00:03:56.122 10:58:16 -- setup/common.sh@19 -- # local var val 00:03:56.122 10:58:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:56.122 10:58:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.122 10:58:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.122 10:58:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.122 10:58:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.122 10:58:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.122 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.122 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 76788668 kB' 'MemFree: 55216540 kB' 'MemAvailable: 60248492 kB' 'Buffers: 2708 kB' 'Cached: 15846748 kB' 'SwapCached: 0 kB' 'Active: 12427308 kB' 'Inactive: 4039544 kB' 'Active(anon): 11245296 kB' 'Inactive(anon): 0 kB' 'Active(file): 1182012 kB' 'Inactive(file): 4039544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620768 kB' 'Mapped: 199424 kB' 'Shmem: 10627900 kB' 'KReclaimable: 524048 kB' 'Slab: 1482528 kB' 'SReclaimable: 524048 kB' 'SUnreclaim: 958480 kB' 'KernelStack: 22960 kB' 'PageTables: 9220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 45734360 kB' 'Committed_AS: 12578648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220956 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2596320 kB' 'DirectMap2M: 27488256 kB' 'DirectMap1G: 56623104 kB' 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ Inactive(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 
10:58:16 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 
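The long runs of "[[ <key> == ... ]]" followed by "continue" throughout this log are bash xtrace output from the get_meminfo helper in setup/common.sh: it snapshots either /proc/meminfo or a per-node /sys/devices/system/node/nodeN/meminfo file, strips the "Node N " prefix, then walks every "key: value" pair until it reaches the requested key and echoes its value. A minimal sketch reconstructed from these trace lines (not the verbatim SPDK source; details may differ) looks like this:

    #!/usr/bin/env bash
    # Simplified sketch of setup/common.sh's get_meminfo, reconstructed from the
    # xtrace lines in this log.
    shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip seen in the trace

    get_meminfo() {
        local get=$1 node=$2
        local var val _ line
        local mem_f mem
        mem_f=/proc/meminfo
        # Per-node query: use the node's own meminfo file when it exists.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node 0 " prefix of per-node files
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && echo "$val" && return 0
            continue   # mirrors the 'continue' lines filling this log
        done
        return 1
    }

    # Example: surplus 2 MiB hugepages currently reported for NUMA node 0.
    get_meminfo HugePages_Surp 0

Called without a node argument, the /sys/devices/system/node/node/meminfo test fails (exactly the "[[ -e ... ]]" / "[[ -n '' ]]" checks visible above), so the helper falls back to the global /proc/meminfo snapshot.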
00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.123 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.123 10:58:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.123 10:58:16 -- setup/common.sh@33 -- # echo 0 00:03:56.123 10:58:16 -- setup/common.sh@33 -- # return 0 00:03:56.123 10:58:16 -- setup/hugepages.sh@97 -- # anon=0 00:03:56.124 10:58:16 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:56.124 10:58:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.124 10:58:16 -- setup/common.sh@18 -- # local node= 00:03:56.124 10:58:16 -- setup/common.sh@19 -- # local var val 00:03:56.124 10:58:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:56.124 10:58:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.124 10:58:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.124 10:58:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.124 10:58:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.124 10:58:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.124 10:58:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 76788668 kB' 'MemFree: 55226952 kB' 'MemAvailable: 60258904 kB' 'Buffers: 2708 kB' 'Cached: 15846752 kB' 'SwapCached: 0 kB' 'Active: 12427596 kB' 'Inactive: 4039544 kB' 'Active(anon): 11245584 kB' 'Inactive(anon): 0 kB' 'Active(file): 1182012 kB' 'Inactive(file): 4039544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620576 kB' 'Mapped: 199496 kB' 'Shmem: 10627904 kB' 'KReclaimable: 524048 kB' 'Slab: 1482340 kB' 'SReclaimable: 524048 kB' 'SUnreclaim: 958292 kB' 'KernelStack: 23024 kB' 'PageTables: 9632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 45734360 kB' 'Committed_AS: 12610384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220844 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2596320 kB' 'DirectMap2M: 27488256 kB' 'DirectMap1G: 56623104 kB' 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.124 10:58:16 -- setup/common.sh@31 
-- # IFS=': ' 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
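The key checks continuing below are the same scan hunting for HugePages_Surp in the global snapshot. Once anon (AnonHugePages), surp (HugePages_Surp) and resv (HugePages_Rsvd) have been read this way, hugepages.sh only has to compare totals: the kernel-reported HugePages_Total must equal the requested nr_hugepages plus surplus and reserved pages, and the NUMA tests (such as the custom_alloc run that just finished with node0=512 / node1=1024) repeat the same check per node via get_meminfo with a node argument. A hedged sketch of that arithmetic, using hypothetical wrapper names rather than the real verify_nr_hugepages body:

    # Hypothetical wrappers illustrating the accounting this trace performs; the
    # function names and structure are assumptions, the formula comes from the
    # "(( 1536 == nr_hugepages + surp + resv ))" line earlier in this log.
    check_hugepages() {
        local nr_hugepages=$1
        local total surp resv
        total=$(get_meminfo HugePages_Total)   # global /proc/meminfo value
        surp=$(get_meminfo HugePages_Surp)
        resv=$(get_meminfo HugePages_Rsvd)
        (( total == nr_hugepages + surp + resv ))
    }

    # Per-node variant used by the NUMA tests: expected counts per node
    # (512 on node0, 1024 on node1 in the run above) against each node's meminfo.
    check_node_hugepages() {
        declare -A expected=([0]=512 [1]=1024)
        local node
        for node in "${!expected[@]}"; do
            (( $(get_meminfo HugePages_Total "$node") == ${expected[$node]} )) || return 1
        done
    }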
00:03:56.124 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.124 10:58:16 -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.124 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.124 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.125 10:58:16 -- setup/common.sh@31 
-- # IFS=': ' 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.125 10:58:16 -- setup/common.sh@33 -- # echo 0 00:03:56.125 10:58:16 -- setup/common.sh@33 -- # return 0 00:03:56.125 10:58:16 -- setup/hugepages.sh@99 -- # surp=0 00:03:56.125 10:58:16 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:56.125 10:58:16 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:56.125 10:58:16 -- setup/common.sh@18 -- # local node= 00:03:56.125 10:58:16 -- setup/common.sh@19 -- # local var val 00:03:56.125 10:58:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:56.125 10:58:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.125 10:58:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.125 10:58:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.125 10:58:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.125 10:58:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.125 10:58:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 76788668 kB' 'MemFree: 55221484 kB' 'MemAvailable: 60253436 kB' 'Buffers: 2708 kB' 'Cached: 15846764 kB' 'SwapCached: 0 kB' 'Active: 12427916 kB' 'Inactive: 4039544 kB' 'Active(anon): 11245904 kB' 'Inactive(anon): 0 kB' 'Active(file): 1182012 kB' 'Inactive(file): 4039544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621320 kB' 'Mapped: 199416 kB' 'Shmem: 10627916 kB' 'KReclaimable: 524048 kB' 'Slab: 1482068 kB' 'SReclaimable: 524048 kB' 'SUnreclaim: 958020 kB' 'KernelStack: 23376 kB' 'PageTables: 10160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 45734360 kB' 'Committed_AS: 12578308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221036 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2596320 kB' 'DirectMap2M: 27488256 kB' 'DirectMap1G: 56623104 kB' 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # [[ 
Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.125 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.125 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.126 10:58:16 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # continue 
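(Editor's note: the long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_... ]]" / "continue" pairs above are the xtrace of the get_meminfo helper in setup/common.sh walking every /proc/meminfo key until it reaches the one it was asked for, HugePages_Surp and then HugePages_Rsvd in this run. Below is a minimal sketch of that loop, based only on the paths and prefix-stripping visible in the trace; it is an illustration, not the script's exact code.)

    shopt -s extglob
    # Sketch of the traced helper: pick /proc/meminfo or a per-node meminfo
    # file, drop the "Node N " prefix that per-node files carry, then scan
    # "key: value" pairs and print the value of the requested key.
    get_meminfo_sketch() {                  # usage: get_meminfo_sketch HugePages_Rsvd [node]
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # no-op for plain /proc/meminfo lines
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

(On this host, get_meminfo_sketch HugePages_Rsvd would print 0, the same value the traced call echoes and returns a little further down.)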
00:03:56.126 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.126 10:58:16 -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.126 10:58:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.126 10:58:16 -- setup/common.sh@33 -- # echo 0 00:03:56.126 10:58:16 -- setup/common.sh@33 -- # return 0 00:03:56.126 10:58:16 -- setup/hugepages.sh@100 -- # resv=0 00:03:56.126 10:58:16 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:56.126 nr_hugepages=1024 00:03:56.126 10:58:16 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:56.126 resv_hugepages=0 00:03:56.126 10:58:16 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:56.126 surplus_hugepages=0 00:03:56.126 10:58:16 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:56.126 anon_hugepages=0 00:03:56.126 10:58:16 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:56.126 10:58:16 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:56.126 10:58:16 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:56.126 10:58:16 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:56.126 10:58:16 -- setup/common.sh@18 -- # local node= 00:03:56.126 10:58:16 -- setup/common.sh@19 -- # local var val 00:03:56.126 10:58:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:56.126 10:58:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.126 10:58:16 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.126 10:58:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.126 10:58:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.126 10:58:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.126 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.127 10:58:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 76788668 kB' 'MemFree: 55218940 kB' 'MemAvailable: 60250892 kB' 'Buffers: 2708 kB' 'Cached: 15846780 kB' 'SwapCached: 0 kB' 'Active: 12428728 kB' 'Inactive: 4039544 kB' 'Active(anon): 11246716 kB' 'Inactive(anon): 0 kB' 'Active(file): 1182012 kB' 'Inactive(file): 4039544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622084 kB' 'Mapped: 199416 kB' 'Shmem: 10627932 kB' 'KReclaimable: 524048 kB' 'Slab: 1482068 kB' 'SReclaimable: 524048 kB' 'SUnreclaim: 958020 kB' 'KernelStack: 23584 kB' 'PageTables: 11092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 45734360 kB' 'Committed_AS: 12578328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220988 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2596320 kB' 'DirectMap2M: 27488256 kB' 'DirectMap1G: 56623104 kB' 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.127 10:58:16 -- 
setup/common.sh@32 -- # continue 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 
00:03:56.127 10:58:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # continue 
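(Editor's note: the hugepages.sh lines interleaved above, "nr_hugepages=1024", "resv_hugepages=0", "surplus_hugepages=0", "anon_hugepages=0" and the "(( 1024 == nr_hugepages + surp + resv ))" check, come from verify_nr_hugepages tallying the counters it just read and confirming that the kernel's HugePages_Total accounts for the requested pages plus any surplus and reserved ones. A hedged sketch of that check, reusing get_meminfo_sketch from above; the function and variable names here are illustrative, not the script's own.)

    # The kernel-reported total must equal requested + surplus + reserved.
    verify_hugepages_sketch() {             # usage: verify_hugepages_sketch 1024
        local expected=$1 surp resv total
        surp=$(get_meminfo_sketch HugePages_Surp)
        resv=$(get_meminfo_sketch HugePages_Rsvd)
        total=$(get_meminfo_sketch HugePages_Total)
        echo "nr_hugepages=$expected resv_hugepages=$resv surplus_hugepages=$surp"
        (( total == expected + surp + resv ))   # 1024 == 1024 + 0 + 0 in this run
    }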
00:03:56.127 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.127 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.127 10:58:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.128 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.128 10:58:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.128 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.128 10:58:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.128 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.128 10:58:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.128 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.128 10:58:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.128 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.128 10:58:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.128 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.128 10:58:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.128 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.128 10:58:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.128 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.128 10:58:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.128 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.128 10:58:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.128 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.128 10:58:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.128 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 
00:03:56.128 10:58:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.128 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.128 10:58:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.128 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.128 10:58:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.128 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.128 10:58:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.128 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.128 10:58:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.128 10:58:16 -- setup/common.sh@33 -- # echo 1024 00:03:56.128 10:58:16 -- setup/common.sh@33 -- # return 0 00:03:56.128 10:58:16 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:56.128 10:58:16 -- setup/hugepages.sh@112 -- # get_nodes 00:03:56.128 10:58:16 -- setup/hugepages.sh@27 -- # local node 00:03:56.128 10:58:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.128 10:58:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:56.128 10:58:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.128 10:58:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:56.128 10:58:16 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:56.128 10:58:16 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:56.128 10:58:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:56.128 10:58:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:56.128 10:58:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:56.128 10:58:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.128 10:58:16 -- setup/common.sh@18 -- # local node=0 00:03:56.128 10:58:16 -- setup/common.sh@19 -- # local var val 00:03:56.128 10:58:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:56.128 10:58:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.128 10:58:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:56.128 10:58:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:56.128 10:58:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.128 10:58:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.128 10:58:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32624304 kB' 'MemFree: 24104692 kB' 'MemUsed: 8519612 kB' 'SwapCached: 0 kB' 'Active: 3969144 kB' 'Inactive: 379308 kB' 'Active(anon): 3056148 kB' 'Inactive(anon): 0 kB' 'Active(file): 912996 kB' 'Inactive(file): 379308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3916356 kB' 'Mapped: 83868 kB' 'AnonPages: 435276 kB' 
'Shmem: 2624052 kB' 'KernelStack: 13416 kB' 'PageTables: 6900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 373344 kB' 'Slab: 971648 kB' 'SReclaimable: 373344 kB' 'SUnreclaim: 598304 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:56.128 10:58:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.128 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.128 10:58:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.128 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.128 10:58:16 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.128 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.128 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.388 10:58:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.388 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.388 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.388 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.388 10:58:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.388 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.388 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.388 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.388 10:58:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.388 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.388 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.388 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.388 10:58:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.388 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.388 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.388 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.388 10:58:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.388 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.388 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.388 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.388 10:58:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.388 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.388 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.388 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.388 10:58:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.388 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.388 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.388 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.388 10:58:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.388 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.388 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.388 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.388 10:58:16 -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.388 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.388 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.388 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.388 10:58:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.388 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.388 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.388 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.388 10:58:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.388 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.388 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.388 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.388 10:58:16 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.388 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.388 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.388 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.388 10:58:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.388 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.388 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.388 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.388 10:58:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.388 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.388 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.388 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.388 10:58:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.388 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.388 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.388 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.388 10:58:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.388 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.388 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.388 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.388 10:58:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.388 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.388 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.388 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.388 10:58:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.388 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.388 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.389 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.389 10:58:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.389 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.389 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.389 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.389 10:58:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.389 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.389 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.389 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.389 10:58:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.389 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.389 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 
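(Editor's note: the scan above reads node0's own meminfo, /sys/devices/system/node/node0/meminfo, rather than the global file. After the system-wide total checks out, the script spreads the expectation across NUMA nodes (nodes_sys[0]=1024, nodes_sys[1]=0, no_nodes=2 earlier in the trace) and verifies each node's share, which is what produces the "node0=1024 expecting 1024" line a little further down. A rough sketch of that per-node pass, again reusing get_meminfo_sketch; the name and argument handling are illustrative.)

    check_nodes_sketch() {                  # usage: check_nodes_sketch 1024 0
        local -a expected=("$@")
        local node id pages
        for node in /sys/devices/system/node/node[0-9]*; do
            id=${node##*node}
            pages=$(get_meminfo_sketch HugePages_Total "$id")
            echo "node${id}=${pages} expecting ${expected[id]:-0}"
        done
    }

(Run as check_nodes_sketch 1024 0 on this two-node box, the first line of output would match the "node0=1024 expecting 1024" reported in the log below.)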
00:03:56.389 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.389 10:58:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.389 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.389 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.389 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.389 10:58:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.389 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.389 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.389 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.389 10:58:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.389 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.389 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.389 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.389 10:58:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.389 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.389 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.389 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.389 10:58:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.389 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.389 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.389 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.389 10:58:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.389 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.389 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.389 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.389 10:58:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.389 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.389 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.389 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.389 10:58:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.389 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.389 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.389 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.389 10:58:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.389 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.389 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.389 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.389 10:58:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.389 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.389 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.389 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.389 10:58:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.389 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.389 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.389 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.389 10:58:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.389 10:58:16 -- setup/common.sh@32 -- # continue 00:03:56.389 10:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.389 10:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.389 10:58:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.389 10:58:16 -- setup/common.sh@33 -- # echo 0 00:03:56.389 10:58:16 -- setup/common.sh@33 -- # return 0 00:03:56.389 10:58:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:56.389 10:58:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:56.389 10:58:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:56.389 10:58:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:56.389 10:58:16 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:56.389 node0=1024 expecting 1024 00:03:56.389 10:58:16 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:56.389 10:58:16 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:56.389 10:58:16 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:56.389 10:58:16 -- setup/hugepages.sh@202 -- # setup output 00:03:56.389 10:58:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.389 10:58:16 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:58.923 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:58.923 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:58.923 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:58.923 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:58.923 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:58.923 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:58.923 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:58.923 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:58.923 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:58.923 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:58.923 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:58.923 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:58.923 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:58.923 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:58.923 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:58.923 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:58.923 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:00.302 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:00.302 10:58:20 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:00.302 10:58:20 -- setup/hugepages.sh@89 -- # local node 00:04:00.302 10:58:20 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:00.302 10:58:20 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:00.302 10:58:20 -- setup/hugepages.sh@92 -- # local surp 00:04:00.302 10:58:20 -- setup/hugepages.sh@93 -- # local resv 00:04:00.302 10:58:20 -- setup/hugepages.sh@94 -- # local anon 00:04:00.302 10:58:20 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:00.302 10:58:20 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:00.302 10:58:20 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:00.302 10:58:20 -- setup/common.sh@18 -- # local node= 00:04:00.302 10:58:20 -- setup/common.sh@19 -- # local var val 00:04:00.302 10:58:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:00.302 10:58:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.302 10:58:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.302 10:58:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.302 10:58:20 -- setup/common.sh@28 -- # 
mapfile -t mem 00:04:00.302 10:58:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.302 10:58:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 76788668 kB' 'MemFree: 55212624 kB' 'MemAvailable: 60244576 kB' 'Buffers: 2708 kB' 'Cached: 15846896 kB' 'SwapCached: 0 kB' 'Active: 12434192 kB' 'Inactive: 4039544 kB' 'Active(anon): 11252180 kB' 'Inactive(anon): 0 kB' 'Active(file): 1182012 kB' 'Inactive(file): 4039544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 627752 kB' 'Mapped: 200400 kB' 'Shmem: 10628048 kB' 'KReclaimable: 524048 kB' 'Slab: 1482320 kB' 'SReclaimable: 524048 kB' 'SUnreclaim: 958272 kB' 'KernelStack: 22992 kB' 'PageTables: 8808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 45734360 kB' 'Committed_AS: 12585600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220992 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2596320 kB' 'DirectMap2M: 27488256 kB' 'DirectMap1G: 56623104 kB' 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # [[ 
Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.302 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.302 10:58:20 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.303 10:58:20 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.303 10:58:20 -- setup/common.sh@33 -- # echo 0 00:04:00.303 10:58:20 -- setup/common.sh@33 -- # return 0 00:04:00.303 10:58:20 -- setup/hugepages.sh@97 -- # anon=0 00:04:00.303 10:58:20 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:00.303 10:58:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.303 10:58:20 -- setup/common.sh@18 -- # local node= 00:04:00.303 10:58:20 -- setup/common.sh@19 -- # local var val 00:04:00.303 10:58:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:00.303 10:58:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.303 10:58:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.303 10:58:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.303 10:58:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.303 10:58:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.303 10:58:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 76788668 kB' 'MemFree: 55217160 kB' 'MemAvailable: 60249112 kB' 'Buffers: 2708 kB' 'Cached: 15846900 kB' 'SwapCached: 0 kB' 'Active: 12428408 kB' 'Inactive: 4039544 kB' 'Active(anon): 11246396 kB' 'Inactive(anon): 0 kB' 'Active(file): 1182012 kB' 'Inactive(file): 4039544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621968 kB' 
'Mapped: 199864 kB' 'Shmem: 10628052 kB' 'KReclaimable: 524048 kB' 'Slab: 1482268 kB' 'SReclaimable: 524048 kB' 'SUnreclaim: 958220 kB' 'KernelStack: 23056 kB' 'PageTables: 9224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 45734360 kB' 'Committed_AS: 12579492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221068 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2596320 kB' 'DirectMap2M: 27488256 kB' 'DirectMap1G: 56623104 kB' 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.303 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.303 10:58:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.304 
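Note on the trace above: the repeated "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" pairs are one linear scan over the mapped meminfo lines - every field name is compared against the requested key and skipped until the match, whose value is then echoed back to the caller. A minimal, self-contained sketch of that lookup pattern (hypothetical helper name get_field; the real setup/common.sh keeps the lines in an array and strips per-node prefixes first) is:

  #!/usr/bin/env bash
  # Sketch only: look up one key in /proc/meminfo the same way the scan above does.
  get_field() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # e.g. AnonPages, HugePages_Surp
          echo "$val"                        # numeric value, unit (kB) lands in "_"
          return 0
      done < /proc/meminfo
      return 1
  }
  get_field HugePages_Surp   # prints "0" in a run like this one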
10:58:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.304 10:58:20 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.304 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.304 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.566 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 10:58:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.566 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 10:58:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.566 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 10:58:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.566 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 10:58:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.566 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 10:58:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.566 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 10:58:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.566 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 10:58:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.566 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 10:58:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.566 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 10:58:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.566 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 10:58:20 -- setup/common.sh@32 -- # [[ Unaccepted 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.566 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 10:58:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.566 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 10:58:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.566 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 10:58:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.566 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 10:58:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 10:58:20 -- setup/common.sh@33 -- # echo 0 00:04:00.566 10:58:20 -- setup/common.sh@33 -- # return 0 00:04:00.566 10:58:20 -- setup/hugepages.sh@99 -- # surp=0 00:04:00.566 10:58:20 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:00.566 10:58:20 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:00.566 10:58:20 -- setup/common.sh@18 -- # local node= 00:04:00.566 10:58:20 -- setup/common.sh@19 -- # local var val 00:04:00.566 10:58:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:00.566 10:58:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.566 10:58:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.566 10:58:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.566 10:58:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.566 10:58:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.566 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 76788668 kB' 'MemFree: 55215568 kB' 'MemAvailable: 60247520 kB' 'Buffers: 2708 kB' 'Cached: 15846916 kB' 'SwapCached: 0 kB' 'Active: 12428140 kB' 'Inactive: 4039544 kB' 'Active(anon): 11246128 kB' 'Inactive(anon): 0 kB' 'Active(file): 1182012 kB' 'Inactive(file): 4039544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621784 kB' 'Mapped: 199464 kB' 'Shmem: 10628068 kB' 'KReclaimable: 524048 kB' 'Slab: 1481904 kB' 'SReclaimable: 524048 kB' 'SUnreclaim: 957856 kB' 'KernelStack: 23040 kB' 'PageTables: 9336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 45734360 kB' 'Committed_AS: 12579516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221052 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2596320 kB' 'DirectMap2M: 27488256 kB' 'DirectMap1G: 56623104 kB' 00:04:00.567 10:58:20 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- 
setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 
10:58:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.568 10:58:20 -- setup/common.sh@33 -- # echo 0 00:04:00.568 
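The test "[[ -e /sys/devices/system/node/node/meminfo ]]" recurring above is the per-node switch evaluated with an empty node argument: the degenerate path does not exist, so the scan stays on the system-wide /proc/meminfo. A hedged sketch of that source selection, under an assumed helper name meminfo_file:

  # Sketch only: pick the meminfo source the way the trace does. With no node
  # argument the path collapses to .../node/node/meminfo, which is absent, so
  # the system-wide file is used; with a node number the per-node file wins.
  meminfo_file() {
      local node=${1:-}
      local mem_f=/proc/meminfo
      if [[ -e /sys/devices/system/node/node${node}/meminfo ]]; then
          mem_f=/sys/devices/system/node/node${node}/meminfo
      fi
      echo "$mem_f"
  }
  meminfo_file      # -> /proc/meminfo
  meminfo_file 0    # -> /sys/devices/system/node/node0/meminfo (if node0 exists)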
10:58:20 -- setup/common.sh@33 -- # return 0 00:04:00.568 10:58:20 -- setup/hugepages.sh@100 -- # resv=0 00:04:00.568 10:58:20 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:00.568 nr_hugepages=1024 00:04:00.568 10:58:20 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:00.568 resv_hugepages=0 00:04:00.568 10:58:20 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:00.568 surplus_hugepages=0 00:04:00.568 10:58:20 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:00.568 anon_hugepages=0 00:04:00.568 10:58:20 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:00.568 10:58:20 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:00.568 10:58:20 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:00.568 10:58:20 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:00.568 10:58:20 -- setup/common.sh@18 -- # local node= 00:04:00.568 10:58:20 -- setup/common.sh@19 -- # local var val 00:04:00.568 10:58:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:00.568 10:58:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.568 10:58:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.568 10:58:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.568 10:58:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.568 10:58:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 10:58:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 76788668 kB' 'MemFree: 55220216 kB' 'MemAvailable: 60252168 kB' 'Buffers: 2708 kB' 'Cached: 15846928 kB' 'SwapCached: 0 kB' 'Active: 12428256 kB' 'Inactive: 4039544 kB' 'Active(anon): 11246244 kB' 'Inactive(anon): 0 kB' 'Active(file): 1182012 kB' 'Inactive(file): 4039544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621772 kB' 'Mapped: 199464 kB' 'Shmem: 10628080 kB' 'KReclaimable: 524048 kB' 'Slab: 1481904 kB' 'SReclaimable: 524048 kB' 'SUnreclaim: 957856 kB' 'KernelStack: 22848 kB' 'PageTables: 9000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 45734360 kB' 'Committed_AS: 12577904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221068 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2596320 kB' 'DirectMap2M: 27488256 kB' 'DirectMap1G: 56623104 kB' 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:00.568 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 10:58:20 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 10:58:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # continue 
00:04:00.569 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 
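The echoes nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 earlier in this pass feed the assertion "(( 1024 == nr_hugepages + surp + resv ))": the kernel's HugePages_Total must equal the requested page count plus any surplus and reserved pages. A standalone sketch of that consistency check (not the test's own code):

  # Sketch only: reproduce the hugepage accounting the test asserts above.
  read -r nr_hugepages < /proc/sys/vm/nr_hugepages
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
  resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
  if (( total == nr_hugepages + surp + resv )); then
      echo "hugepage accounting consistent: $total pages"
  else
      echo "mismatch: total=$total nr=$nr_hugepages surp=$surp resv=$resv" >&2
  fi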
10:58:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 10:58:20 -- setup/common.sh@33 -- # echo 1024 00:04:00.569 10:58:20 -- setup/common.sh@33 -- # return 0 00:04:00.569 10:58:20 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:00.569 10:58:20 -- setup/hugepages.sh@112 -- # get_nodes 00:04:00.569 10:58:20 -- setup/hugepages.sh@27 -- # local node 00:04:00.569 10:58:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.569 10:58:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:00.569 10:58:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.569 10:58:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:00.569 10:58:20 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:00.569 10:58:20 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:00.569 10:58:20 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:00.569 10:58:20 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:00.569 10:58:20 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:00.569 10:58:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.569 10:58:20 
-- setup/common.sh@18 -- # local node=0 00:04:00.569 10:58:20 -- setup/common.sh@19 -- # local var val 00:04:00.569 10:58:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:00.569 10:58:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.569 10:58:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:00.569 10:58:20 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:00.569 10:58:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.569 10:58:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 10:58:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32624304 kB' 'MemFree: 24115640 kB' 'MemUsed: 8508664 kB' 'SwapCached: 0 kB' 'Active: 3968340 kB' 'Inactive: 379308 kB' 'Active(anon): 3055344 kB' 'Inactive(anon): 0 kB' 'Active(file): 912996 kB' 'Inactive(file): 379308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3916368 kB' 'Mapped: 83868 kB' 'AnonPages: 434632 kB' 'Shmem: 2624064 kB' 'KernelStack: 13080 kB' 'PageTables: 5944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 373344 kB' 'Slab: 971736 kB' 'SReclaimable: 373344 kB' 'SUnreclaim: 598392 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 10:58:20 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.569 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 
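Once the global counters check out, get_nodes (visible above) walks /sys/devices/system/node/node* and records the expected split - 1024 pages on node 0, 0 on node 1, no_nodes=2 - before re-reading node 0's own meminfo, which is the per-node dump shown just above. A rough sketch of such a walk that reads each node's actual HugePages_Total from the kernel rather than the test's requested split (illustrative only; array name nodes_sys borrowed from the trace):

  # Sketch only: enumerate NUMA nodes and read each node's hugepage count
  # from its own meminfo file, mirroring the per-node pass above.
  declare -A nodes_sys
  for node in /sys/devices/system/node/node[0-9]*; do
      n=${node##*node}
      nodes_sys[$n]=$(awk '/^Node '"$n"' HugePages_Total:/ {print $4}' "$node/meminfo")
  done
  echo "nodes: ${!nodes_sys[*]}"              # e.g. 0 1
  echo "hugepages per node: ${nodes_sys[*]}"  # e.g. 1024 0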
00:04:00.570 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:00.570 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # continue 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 10:58:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 10:58:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.570 10:58:20 -- setup/common.sh@33 -- # echo 0 00:04:00.570 10:58:20 -- setup/common.sh@33 -- # return 0 00:04:00.570 10:58:20 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:00.570 10:58:20 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:00.570 10:58:20 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:00.570 10:58:20 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:00.570 10:58:20 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:00.570 node0=1024 expecting 1024 00:04:00.570 10:58:20 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:00.570 00:04:00.570 real 0m8.305s 00:04:00.570 user 0m3.035s 00:04:00.570 sys 0m5.189s 00:04:00.570 10:58:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:00.570 10:58:20 -- common/autotest_common.sh@10 -- # set +x 00:04:00.570 ************************************ 00:04:00.570 END TEST no_shrink_alloc 00:04:00.570 ************************************ 00:04:00.570 10:58:21 -- setup/hugepages.sh@217 -- # clear_hp 00:04:00.570 10:58:21 -- setup/hugepages.sh@37 -- # local node hp 00:04:00.570 10:58:21 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:00.570 10:58:21 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:00.570 10:58:21 -- setup/hugepages.sh@41 -- # echo 0 00:04:00.570 10:58:21 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:00.570 10:58:21 -- setup/hugepages.sh@41 -- # echo 0 00:04:00.570 10:58:21 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:00.570 10:58:21 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:00.570 10:58:21 -- setup/hugepages.sh@41 -- # echo 0 00:04:00.570 10:58:21 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:00.570 10:58:21 -- setup/hugepages.sh@41 -- # echo 0 00:04:00.570 10:58:21 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:00.570 10:58:21 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:00.570 00:04:00.570 real 0m32.982s 00:04:00.570 user 0m10.819s 00:04:00.570 sys 0m18.705s 00:04:00.570 10:58:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:00.570 10:58:21 -- common/autotest_common.sh@10 -- # set +x 
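The trace above is setup/common.sh scanning /sys/devices/system/node/node0/meminfo one field at a time until it hits HugePages_Surp (0 on this node), after which clear_hp zeroes every per-node hugepage pool and exports CLEAR_HUGE=yes. A minimal stand-alone sketch of the same bookkeeping, not the SPDK helper itself (the awk filter and the tee redirection are illustrative assumptions about doing this outside the harness):

    # Read the surplus hugepage count for each NUMA node, then zero its pools.
    for node in /sys/devices/system/node/node[0-9]*; do
        # Per-node meminfo lines look like "Node 0 HugePages_Surp: 0".
        surp=$(awk '/HugePages_Surp/ {print $NF}' "$node/meminfo")
        echo "node ${node##*node}: HugePages_Surp=$surp"
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 | sudo tee "$hp/nr_hugepages" > /dev/null   # same effect as clear_hp's echo 0
        done
    done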
00:04:00.570 ************************************ 00:04:00.570 END TEST hugepages 00:04:00.570 ************************************ 00:04:00.570 10:58:21 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:00.570 10:58:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:00.570 10:58:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:00.570 10:58:21 -- common/autotest_common.sh@10 -- # set +x 00:04:00.570 ************************************ 00:04:00.570 START TEST driver 00:04:00.570 ************************************ 00:04:00.570 10:58:21 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:00.830 * Looking for test storage... 00:04:00.830 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:00.830 10:58:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:00.830 10:58:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:00.830 10:58:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:00.830 10:58:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:00.830 10:58:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:00.830 10:58:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:00.830 10:58:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:00.830 10:58:21 -- scripts/common.sh@335 -- # IFS=.-: 00:04:00.830 10:58:21 -- scripts/common.sh@335 -- # read -ra ver1 00:04:00.830 10:58:21 -- scripts/common.sh@336 -- # IFS=.-: 00:04:00.830 10:58:21 -- scripts/common.sh@336 -- # read -ra ver2 00:04:00.830 10:58:21 -- scripts/common.sh@337 -- # local 'op=<' 00:04:00.830 10:58:21 -- scripts/common.sh@339 -- # ver1_l=2 00:04:00.830 10:58:21 -- scripts/common.sh@340 -- # ver2_l=1 00:04:00.830 10:58:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:00.830 10:58:21 -- scripts/common.sh@343 -- # case "$op" in 00:04:00.830 10:58:21 -- scripts/common.sh@344 -- # : 1 00:04:00.830 10:58:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:00.830 10:58:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:00.830 10:58:21 -- scripts/common.sh@364 -- # decimal 1 00:04:00.830 10:58:21 -- scripts/common.sh@352 -- # local d=1 00:04:00.830 10:58:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:00.830 10:58:21 -- scripts/common.sh@354 -- # echo 1 00:04:00.830 10:58:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:00.830 10:58:21 -- scripts/common.sh@365 -- # decimal 2 00:04:00.830 10:58:21 -- scripts/common.sh@352 -- # local d=2 00:04:00.830 10:58:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:00.830 10:58:21 -- scripts/common.sh@354 -- # echo 2 00:04:00.830 10:58:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:00.830 10:58:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:00.830 10:58:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:00.830 10:58:21 -- scripts/common.sh@367 -- # return 0 00:04:00.830 10:58:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:00.830 10:58:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:00.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.830 --rc genhtml_branch_coverage=1 00:04:00.830 --rc genhtml_function_coverage=1 00:04:00.830 --rc genhtml_legend=1 00:04:00.830 --rc geninfo_all_blocks=1 00:04:00.830 --rc geninfo_unexecuted_blocks=1 00:04:00.830 00:04:00.830 ' 00:04:00.830 10:58:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:00.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.830 --rc genhtml_branch_coverage=1 00:04:00.830 --rc genhtml_function_coverage=1 00:04:00.830 --rc genhtml_legend=1 00:04:00.830 --rc geninfo_all_blocks=1 00:04:00.830 --rc geninfo_unexecuted_blocks=1 00:04:00.830 00:04:00.830 ' 00:04:00.830 10:58:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:00.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.830 --rc genhtml_branch_coverage=1 00:04:00.830 --rc genhtml_function_coverage=1 00:04:00.830 --rc genhtml_legend=1 00:04:00.830 --rc geninfo_all_blocks=1 00:04:00.830 --rc geninfo_unexecuted_blocks=1 00:04:00.830 00:04:00.830 ' 00:04:00.830 10:58:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:00.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.830 --rc genhtml_branch_coverage=1 00:04:00.830 --rc genhtml_function_coverage=1 00:04:00.830 --rc genhtml_legend=1 00:04:00.830 --rc geninfo_all_blocks=1 00:04:00.830 --rc geninfo_unexecuted_blocks=1 00:04:00.830 00:04:00.830 ' 00:04:00.830 10:58:21 -- setup/driver.sh@68 -- # setup reset 00:04:00.830 10:58:21 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:00.830 10:58:21 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:06.105 10:58:26 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:06.105 10:58:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:06.105 10:58:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:06.105 10:58:26 -- common/autotest_common.sh@10 -- # set +x 00:04:06.105 ************************************ 00:04:06.105 START TEST guess_driver 00:04:06.105 ************************************ 00:04:06.105 10:58:26 -- common/autotest_common.sh@1114 -- # guess_driver 00:04:06.105 10:58:26 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:06.105 10:58:26 -- setup/driver.sh@47 -- # local fail=0 00:04:06.105 10:58:26 -- setup/driver.sh@49 -- # pick_driver 00:04:06.105 10:58:26 -- setup/driver.sh@36 -- 
# vfio 00:04:06.105 10:58:26 -- setup/driver.sh@21 -- # local iommu_grups 00:04:06.105 10:58:26 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:06.105 10:58:26 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:06.105 10:58:26 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:06.105 10:58:26 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:06.105 10:58:26 -- setup/driver.sh@29 -- # (( 181 > 0 )) 00:04:06.105 10:58:26 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:06.105 10:58:26 -- setup/driver.sh@14 -- # mod vfio_pci 00:04:06.105 10:58:26 -- setup/driver.sh@12 -- # dep vfio_pci 00:04:06.105 10:58:26 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:06.105 10:58:26 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:06.105 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:06.106 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:06.106 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:06.106 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:06.106 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:06.106 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:06.106 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:06.106 10:58:26 -- setup/driver.sh@30 -- # return 0 00:04:06.106 10:58:26 -- setup/driver.sh@37 -- # echo vfio-pci 00:04:06.106 10:58:26 -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:06.106 10:58:26 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:06.106 10:58:26 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:06.106 Looking for driver=vfio-pci 00:04:06.106 10:58:26 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.106 10:58:26 -- setup/driver.sh@45 -- # setup output config 00:04:06.106 10:58:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.106 10:58:26 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:09.398 10:58:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.398 10:58:29 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.398 10:58:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.398 10:58:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.398 10:58:29 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.398 10:58:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.398 10:58:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.398 10:58:29 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.398 10:58:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.398 10:58:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.398 10:58:29 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.398 10:58:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.398 10:58:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.398 10:58:29 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.398 10:58:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.398 10:58:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.398 10:58:29 -- setup/driver.sh@61 
-- # [[ vfio-pci == vfio-pci ]] 00:04:09.398 10:58:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.398 10:58:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.398 10:58:29 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.398 10:58:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.398 10:58:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.398 10:58:29 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.398 10:58:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.398 10:58:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.398 10:58:29 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.398 10:58:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.398 10:58:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.398 10:58:29 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.398 10:58:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.398 10:58:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.398 10:58:29 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.398 10:58:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.398 10:58:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.398 10:58:29 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.398 10:58:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.398 10:58:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.398 10:58:29 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.398 10:58:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.398 10:58:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.398 10:58:29 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.398 10:58:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.398 10:58:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.398 10:58:29 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.398 10:58:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.398 10:58:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.398 10:58:29 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.398 10:58:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.688 10:58:32 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.688 10:58:32 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.688 10:58:32 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.065 10:58:34 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:14.065 10:58:34 -- setup/driver.sh@65 -- # setup reset 00:04:14.065 10:58:34 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:14.065 10:58:34 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:19.342 00:04:19.342 real 0m13.007s 00:04:19.342 user 0m3.213s 00:04:19.342 sys 0m5.743s 00:04:19.342 10:58:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:19.342 10:58:39 -- common/autotest_common.sh@10 -- # set +x 00:04:19.342 ************************************ 00:04:19.342 END TEST guess_driver 00:04:19.342 ************************************ 00:04:19.342 00:04:19.342 real 0m18.622s 00:04:19.342 user 0m4.924s 00:04:19.342 sys 0m8.914s 00:04:19.342 10:58:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:19.342 10:58:39 -- common/autotest_common.sh@10 -- # set +x 00:04:19.342 
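The guess_driver run above counts the entries under /sys/kernel/iommu_groups (181 on this host), asks modprobe --show-depends whether vfio_pci resolves to loadable .ko modules, and settles on vfio-pci; the config pass then re-reads the per-device markers until each one reports vfio-pci. A condensed sketch of that decision, assuming uio_pci_generic as the fallback the harness would otherwise try (the function name is illustrative):

    pick_driver_sketch() {
        shopt -s nullglob          # so an empty iommu_groups directory counts as zero groups
        local groups=(/sys/kernel/iommu_groups/*)
        if ((${#groups[@]} > 0)) && modprobe --show-depends vfio_pci &> /dev/null; then
            echo vfio-pci          # IOMMU groups present and the module chain resolves
        elif modprobe --show-depends uio_pci_generic &> /dev/null; then
            echo uio_pci_generic
        else
            echo 'No valid driver found'
        fi
    }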
************************************ 00:04:19.342 END TEST driver 00:04:19.342 ************************************ 00:04:19.342 10:58:39 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:19.342 10:58:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:19.342 10:58:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:19.342 10:58:39 -- common/autotest_common.sh@10 -- # set +x 00:04:19.342 ************************************ 00:04:19.342 START TEST devices 00:04:19.342 ************************************ 00:04:19.342 10:58:39 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:19.342 * Looking for test storage... 00:04:19.342 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:19.342 10:58:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:19.342 10:58:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:19.343 10:58:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:19.343 10:58:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:19.343 10:58:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:19.343 10:58:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:19.343 10:58:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:19.343 10:58:39 -- scripts/common.sh@335 -- # IFS=.-: 00:04:19.343 10:58:39 -- scripts/common.sh@335 -- # read -ra ver1 00:04:19.343 10:58:39 -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.343 10:58:39 -- scripts/common.sh@336 -- # read -ra ver2 00:04:19.343 10:58:39 -- scripts/common.sh@337 -- # local 'op=<' 00:04:19.343 10:58:39 -- scripts/common.sh@339 -- # ver1_l=2 00:04:19.343 10:58:39 -- scripts/common.sh@340 -- # ver2_l=1 00:04:19.343 10:58:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:19.343 10:58:39 -- scripts/common.sh@343 -- # case "$op" in 00:04:19.343 10:58:39 -- scripts/common.sh@344 -- # : 1 00:04:19.343 10:58:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:19.343 10:58:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:19.343 10:58:39 -- scripts/common.sh@364 -- # decimal 1 00:04:19.343 10:58:39 -- scripts/common.sh@352 -- # local d=1 00:04:19.343 10:58:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.343 10:58:39 -- scripts/common.sh@354 -- # echo 1 00:04:19.343 10:58:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:19.343 10:58:39 -- scripts/common.sh@365 -- # decimal 2 00:04:19.343 10:58:39 -- scripts/common.sh@352 -- # local d=2 00:04:19.343 10:58:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.343 10:58:39 -- scripts/common.sh@354 -- # echo 2 00:04:19.343 10:58:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:19.343 10:58:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:19.343 10:58:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:19.343 10:58:39 -- scripts/common.sh@367 -- # return 0 00:04:19.343 10:58:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.343 10:58:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:19.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.343 --rc genhtml_branch_coverage=1 00:04:19.343 --rc genhtml_function_coverage=1 00:04:19.343 --rc genhtml_legend=1 00:04:19.343 --rc geninfo_all_blocks=1 00:04:19.343 --rc geninfo_unexecuted_blocks=1 00:04:19.343 00:04:19.343 ' 00:04:19.343 10:58:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:19.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.343 --rc genhtml_branch_coverage=1 00:04:19.343 --rc genhtml_function_coverage=1 00:04:19.343 --rc genhtml_legend=1 00:04:19.343 --rc geninfo_all_blocks=1 00:04:19.343 --rc geninfo_unexecuted_blocks=1 00:04:19.343 00:04:19.343 ' 00:04:19.343 10:58:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:19.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.343 --rc genhtml_branch_coverage=1 00:04:19.343 --rc genhtml_function_coverage=1 00:04:19.343 --rc genhtml_legend=1 00:04:19.343 --rc geninfo_all_blocks=1 00:04:19.343 --rc geninfo_unexecuted_blocks=1 00:04:19.343 00:04:19.343 ' 00:04:19.343 10:58:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:19.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.343 --rc genhtml_branch_coverage=1 00:04:19.343 --rc genhtml_function_coverage=1 00:04:19.343 --rc genhtml_legend=1 00:04:19.343 --rc geninfo_all_blocks=1 00:04:19.343 --rc geninfo_unexecuted_blocks=1 00:04:19.343 00:04:19.343 ' 00:04:19.343 10:58:39 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:19.343 10:58:39 -- setup/devices.sh@192 -- # setup reset 00:04:19.343 10:58:39 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:19.343 10:58:39 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:24.617 10:58:44 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:24.617 10:58:44 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:24.617 10:58:44 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:24.617 10:58:44 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:24.617 10:58:44 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:24.617 10:58:44 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:24.617 10:58:44 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:24.617 10:58:44 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:24.617 10:58:44 -- 
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:24.617 10:58:44 -- setup/devices.sh@196 -- # blocks=() 00:04:24.617 10:58:44 -- setup/devices.sh@196 -- # declare -a blocks 00:04:24.617 10:58:44 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:24.617 10:58:44 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:24.617 10:58:44 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:24.617 10:58:44 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:24.617 10:58:44 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:24.617 10:58:44 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:24.617 10:58:44 -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:04:24.617 10:58:44 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:24.617 10:58:44 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:24.617 10:58:44 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:24.617 10:58:44 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:24.617 No valid GPT data, bailing 00:04:24.617 10:58:44 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:24.617 10:58:44 -- scripts/common.sh@393 -- # pt= 00:04:24.617 10:58:44 -- scripts/common.sh@394 -- # return 1 00:04:24.617 10:58:44 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:24.617 10:58:44 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:24.617 10:58:44 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:24.617 10:58:44 -- setup/common.sh@80 -- # echo 4000787030016 00:04:24.617 10:58:44 -- setup/devices.sh@204 -- # (( 4000787030016 >= min_disk_size )) 00:04:24.617 10:58:44 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:24.617 10:58:44 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:04:24.617 10:58:44 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:24.617 10:58:44 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:24.617 10:58:44 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:24.617 10:58:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:24.617 10:58:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:24.617 10:58:44 -- common/autotest_common.sh@10 -- # set +x 00:04:24.617 ************************************ 00:04:24.617 START TEST nvme_mount 00:04:24.617 ************************************ 00:04:24.617 10:58:44 -- common/autotest_common.sh@1114 -- # nvme_mount 00:04:24.617 10:58:44 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:24.617 10:58:44 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:24.617 10:58:44 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.617 10:58:44 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:24.617 10:58:44 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:24.617 10:58:44 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:24.617 10:58:44 -- setup/common.sh@40 -- # local part_no=1 00:04:24.617 10:58:44 -- setup/common.sh@41 -- # local size=1073741824 00:04:24.617 10:58:44 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:24.617 10:58:44 -- setup/common.sh@44 -- # parts=() 00:04:24.617 10:58:44 -- setup/common.sh@44 -- # local parts 00:04:24.617 10:58:44 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:24.617 10:58:44 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:24.617 10:58:44 
-- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:24.617 10:58:44 -- setup/common.sh@46 -- # (( part++ )) 00:04:24.617 10:58:44 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:24.617 10:58:44 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:24.617 10:58:44 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:24.617 10:58:44 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:25.185 Creating new GPT entries in memory. 00:04:25.185 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:25.185 other utilities. 00:04:25.185 10:58:45 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:25.185 10:58:45 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:25.185 10:58:45 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:25.185 10:58:45 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:25.185 10:58:45 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:26.123 Creating new GPT entries in memory. 00:04:26.123 The operation has completed successfully. 00:04:26.123 10:58:46 -- setup/common.sh@57 -- # (( part++ )) 00:04:26.123 10:58:46 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:26.123 10:58:46 -- setup/common.sh@62 -- # wait 1419995 00:04:26.123 10:58:46 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.123 10:58:46 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:26.123 10:58:46 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.123 10:58:46 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:26.123 10:58:46 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:26.123 10:58:46 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.123 10:58:46 -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:26.123 10:58:46 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:26.123 10:58:46 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:26.123 10:58:46 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.123 10:58:46 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:26.123 10:58:46 -- setup/devices.sh@53 -- # local found=0 00:04:26.123 10:58:46 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:26.123 10:58:46 -- setup/devices.sh@56 -- # : 00:04:26.123 10:58:46 -- setup/devices.sh@59 -- # local pci status 00:04:26.123 10:58:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.123 10:58:46 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:26.123 10:58:46 -- setup/devices.sh@47 -- # setup output config 00:04:26.123 10:58:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.123 10:58:46 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:29.412 10:58:49 -- setup/devices.sh@62 -- # [[ 
0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.412 10:58:49 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:29.412 10:58:49 -- setup/devices.sh@63 -- # found=1 00:04:29.412 10:58:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.412 10:58:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.412 10:58:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.412 10:58:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.412 10:58:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.412 10:58:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.412 10:58:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.412 10:58:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.412 10:58:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.412 10:58:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.412 10:58:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.412 10:58:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.412 10:58:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.412 10:58:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.412 10:58:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.412 10:58:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.412 10:58:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.412 10:58:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.412 10:58:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.412 10:58:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.412 10:58:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.412 10:58:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.412 10:58:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.412 10:58:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.412 10:58:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.412 10:58:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.412 10:58:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.412 10:58:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.412 10:58:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.412 10:58:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.412 10:58:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.412 10:58:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.412 10:58:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.348 10:58:50 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:30.348 10:58:50 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:30.348 10:58:50 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.348 10:58:50 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 
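Before that mount sequence, devices.sh settled on its test disk: nvme0n1 is not zoned, spdk-gpt.py and blkid found no partition table in use ("No valid GPT data, bailing"), and the reported 4000787030016 bytes clear the 3221225472-byte min_disk_size, so the disk behind 0000:d8:00.0 is used. The nvme_mount flow then zaps the label, creates a 1 GiB partition, formats and mounts it, and drops the test_nvme file the verify step looks for. A plain-command sketch of both steps, with an illustrative mountpoint standing in for the workspace path:

    disk=/dev/nvme0n1
    name=${disk##*/}
    mnt=/mnt/nvme_test        # stand-in for .../spdk/test/setup/nvme_mount
    if [[ $(< /sys/block/$name/queue/zoned) == none ]] &&
       [[ -z $(blkid -s PTTYPE -o value "$disk") ]] &&
       (( $(< /sys/block/$name/size) * 512 >= 3221225472 )); then   # sysfs size is in 512-byte sectors
        sgdisk "$disk" --zap-all
        sgdisk "$disk" --new=1:2048:2099199    # sectors 2048..2099199 = 1 GiB
        mkfs.ext4 -qF "${disk}p1"
        mkdir -p "$mnt" && mount "${disk}p1" "$mnt"
        touch "$mnt/test_nvme"
        # cleanup_nvme equivalent:
        umount "$mnt" && wipefs --all "${disk}p1"
    fi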
00:04:30.348 10:58:50 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:30.348 10:58:50 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:30.348 10:58:50 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.348 10:58:50 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.348 10:58:50 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:30.348 10:58:50 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:30.348 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:30.348 10:58:50 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:30.348 10:58:50 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:30.607 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:30.607 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:30.607 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:30.607 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:30.607 10:58:51 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:30.607 10:58:51 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:30.607 10:58:51 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.607 10:58:51 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:30.607 10:58:51 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:30.607 10:58:51 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.607 10:58:51 -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:30.607 10:58:51 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:30.607 10:58:51 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:30.608 10:58:51 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.608 10:58:51 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:30.608 10:58:51 -- setup/devices.sh@53 -- # local found=0 00:04:30.608 10:58:51 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:30.608 10:58:51 -- setup/devices.sh@56 -- # : 00:04:30.608 10:58:51 -- setup/devices.sh@59 -- # local pci status 00:04:30.608 10:58:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.608 10:58:51 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:30.608 10:58:51 -- setup/devices.sh@47 -- # setup output config 00:04:30.608 10:58:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.608 10:58:51 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:33.144 10:58:53 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.145 10:58:53 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:33.145 10:58:53 -- setup/devices.sh@63 -- # found=1 00:04:33.145 10:58:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.145 10:58:53 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.145 10:58:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.145 10:58:53 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.145 10:58:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.145 10:58:53 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.145 10:58:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.145 10:58:53 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.145 10:58:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.145 10:58:53 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.145 10:58:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.145 10:58:53 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.145 10:58:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.145 10:58:53 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.145 10:58:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.145 10:58:53 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.145 10:58:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.145 10:58:53 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.145 10:58:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.145 10:58:53 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.145 10:58:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.145 10:58:53 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.145 10:58:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.145 10:58:53 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.145 10:58:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.145 10:58:53 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.145 10:58:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.145 10:58:53 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.145 10:58:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.145 10:58:53 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.145 10:58:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.145 10:58:53 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:33.145 10:58:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.612 10:58:55 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:34.612 10:58:55 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:34.612 10:58:55 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:34.612 10:58:55 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:34.612 10:58:55 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:34.612 10:58:55 -- setup/devices.sh@123 -- # umount 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:34.612 10:58:55 -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:04:34.612 10:58:55 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:34.612 10:58:55 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:34.612 10:58:55 -- setup/devices.sh@50 -- # local mount_point= 00:04:34.612 10:58:55 -- setup/devices.sh@51 -- # local test_file= 00:04:34.612 10:58:55 -- setup/devices.sh@53 -- # local found=0 00:04:34.612 10:58:55 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:34.612 10:58:55 -- setup/devices.sh@59 -- # local pci status 00:04:34.612 10:58:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.612 10:58:55 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:34.612 10:58:55 -- setup/devices.sh@47 -- # setup output config 00:04:34.612 10:58:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.612 10:58:55 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:37.149 10:58:57 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.149 10:58:57 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:37.149 10:58:57 -- setup/devices.sh@63 -- # found=1 00:04:37.149 10:58:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.149 10:58:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.149 10:58:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.149 10:58:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.149 10:58:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.149 10:58:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.149 10:58:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.149 10:58:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.149 10:58:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.149 10:58:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.149 10:58:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.149 10:58:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.149 10:58:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.149 10:58:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.149 10:58:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.149 10:58:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.149 10:58:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.149 10:58:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.149 10:58:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.149 10:58:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.149 10:58:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.149 10:58:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.149 10:58:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.408 10:58:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.408 10:58:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.408 10:58:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.408 10:58:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.408 10:58:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.408 10:58:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.408 10:58:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.408 10:58:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.408 10:58:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.408 10:58:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.786 10:58:59 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:38.786 10:58:59 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:38.786 10:58:59 -- setup/devices.sh@68 -- # return 0 00:04:38.786 10:58:59 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:38.786 10:58:59 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.786 10:58:59 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:38.786 10:58:59 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:38.786 10:58:59 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:38.786 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:38.786 00:04:38.786 real 0m14.723s 00:04:38.786 user 0m4.561s 00:04:38.786 sys 0m7.872s 00:04:38.786 10:58:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:38.786 10:58:59 -- common/autotest_common.sh@10 -- # set +x 00:04:38.786 ************************************ 00:04:38.786 END TEST nvme_mount 00:04:38.786 ************************************ 00:04:38.786 10:58:59 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:38.786 10:58:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:38.786 10:58:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:38.786 10:58:59 -- common/autotest_common.sh@10 -- # set +x 00:04:38.786 ************************************ 00:04:38.786 START TEST dm_mount 00:04:38.786 ************************************ 00:04:38.786 10:58:59 -- common/autotest_common.sh@1114 -- # dm_mount 00:04:38.786 10:58:59 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:38.786 10:58:59 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:38.786 10:58:59 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:38.786 10:58:59 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:38.786 10:58:59 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:38.786 10:58:59 -- setup/common.sh@40 -- # local part_no=2 00:04:38.786 10:58:59 -- setup/common.sh@41 -- # local size=1073741824 00:04:38.786 10:58:59 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:38.786 10:58:59 -- setup/common.sh@44 -- # parts=() 00:04:38.786 10:58:59 -- setup/common.sh@44 -- # local parts 00:04:38.786 10:58:59 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:38.786 10:58:59 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:38.786 10:58:59 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:38.786 10:58:59 -- setup/common.sh@46 -- # (( part++ )) 00:04:38.786 10:58:59 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:38.786 10:58:59 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:38.786 10:58:59 -- setup/common.sh@46 -- # (( part++ )) 00:04:38.786 10:58:59 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:38.786 10:58:59 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:38.786 10:58:59 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:38.786 
10:58:59 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:39.724 Creating new GPT entries in memory. 00:04:39.724 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:39.724 other utilities. 00:04:39.724 10:59:00 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:39.724 10:59:00 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:39.724 10:59:00 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:39.724 10:59:00 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:39.724 10:59:00 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:41.104 Creating new GPT entries in memory. 00:04:41.104 The operation has completed successfully. 00:04:41.104 10:59:01 -- setup/common.sh@57 -- # (( part++ )) 00:04:41.104 10:59:01 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:41.104 10:59:01 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:41.104 10:59:01 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:41.104 10:59:01 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:42.041 The operation has completed successfully. 00:04:42.041 10:59:02 -- setup/common.sh@57 -- # (( part++ )) 00:04:42.041 10:59:02 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:42.041 10:59:02 -- setup/common.sh@62 -- # wait 1425120 00:04:42.041 10:59:02 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:42.041 10:59:02 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:42.041 10:59:02 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:42.041 10:59:02 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:42.041 10:59:02 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:42.041 10:59:02 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:42.041 10:59:02 -- setup/devices.sh@161 -- # break 00:04:42.041 10:59:02 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:42.041 10:59:02 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:42.041 10:59:02 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:42.041 10:59:02 -- setup/devices.sh@166 -- # dm=dm-0 00:04:42.041 10:59:02 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:42.041 10:59:02 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:42.041 10:59:02 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:42.041 10:59:02 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:04:42.041 10:59:02 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:42.041 10:59:02 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:42.041 10:59:02 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:42.041 10:59:02 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:42.041 10:59:02 -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:42.041 10:59:02 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:42.041 10:59:02 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:42.041 10:59:02 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:42.041 10:59:02 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:42.041 10:59:02 -- setup/devices.sh@53 -- # local found=0 00:04:42.041 10:59:02 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:42.041 10:59:02 -- setup/devices.sh@56 -- # : 00:04:42.041 10:59:02 -- setup/devices.sh@59 -- # local pci status 00:04:42.041 10:59:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.041 10:59:02 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:42.041 10:59:02 -- setup/devices.sh@47 -- # setup output config 00:04:42.041 10:59:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.041 10:59:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:44.580 10:59:04 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.580 10:59:04 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:44.580 10:59:04 -- setup/devices.sh@63 -- # found=1 00:04:44.580 10:59:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.580 10:59:04 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.580 10:59:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.580 10:59:04 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.580 10:59:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.580 10:59:04 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.580 10:59:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.580 10:59:04 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.580 10:59:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.580 10:59:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.580 10:59:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.580 10:59:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.580 10:59:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.580 10:59:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.580 10:59:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.580 10:59:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.580 10:59:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.580 10:59:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.580 10:59:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.580 10:59:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.580 10:59:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.580 10:59:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.580 10:59:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.580 10:59:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.580 10:59:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.580 10:59:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.580 10:59:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.580 10:59:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.580 10:59:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.580 10:59:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.580 10:59:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.580 10:59:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.580 10:59:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.484 10:59:06 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:46.484 10:59:06 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:46.484 10:59:06 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:46.484 10:59:06 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:46.484 10:59:06 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:46.484 10:59:06 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:46.484 10:59:06 -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:46.484 10:59:06 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:46.484 10:59:06 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:46.484 10:59:06 -- setup/devices.sh@50 -- # local mount_point= 00:04:46.484 10:59:06 -- setup/devices.sh@51 -- # local test_file= 00:04:46.484 10:59:06 -- setup/devices.sh@53 -- # local found=0 00:04:46.484 10:59:06 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:46.484 10:59:06 -- setup/devices.sh@59 -- # local pci status 00:04:46.484 10:59:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.484 10:59:06 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:46.484 10:59:06 -- setup/devices.sh@47 -- # setup output config 00:04:46.484 10:59:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.484 10:59:06 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:49.026 10:59:09 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.026 10:59:09 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:49.026 10:59:09 -- setup/devices.sh@63 -- # found=1 00:04:49.026 10:59:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.026 10:59:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.026 10:59:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.026 10:59:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.026 10:59:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.026 10:59:09 -- 
setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.026 10:59:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.026 10:59:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.026 10:59:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.026 10:59:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.026 10:59:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.026 10:59:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.026 10:59:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.026 10:59:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.026 10:59:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.026 10:59:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.026 10:59:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.026 10:59:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.026 10:59:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.026 10:59:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.026 10:59:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.026 10:59:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.026 10:59:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.026 10:59:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.026 10:59:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.026 10:59:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.026 10:59:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.026 10:59:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.026 10:59:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.026 10:59:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.026 10:59:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.026 10:59:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.026 10:59:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.404 10:59:10 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:50.405 10:59:10 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:50.405 10:59:10 -- setup/devices.sh@68 -- # return 0 00:04:50.405 10:59:10 -- setup/devices.sh@187 -- # cleanup_dm 00:04:50.405 10:59:10 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:50.405 10:59:10 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:50.405 10:59:10 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:50.405 10:59:10 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:50.405 10:59:10 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:50.405 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:50.405 10:59:10 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:50.405 10:59:10 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:50.405 00:04:50.405 real 0m11.462s 00:04:50.405 user 0m2.997s 00:04:50.405 sys 0m5.371s 00:04:50.405 10:59:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:50.405 10:59:10 -- common/autotest_common.sh@10 -- # set +x 00:04:50.405 
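For reference, the dm_mount sequence exercised above reduces to the following shell sketch. The device name and sector ranges are the ones used in this run; the dmsetup table is only illustrative (a simple linear concatenation of the two 1 GiB partitions), since the exact table built by test/setup/devices.sh is not shown in this log, and $dm_mount stands for the dm_mount directory used above.

    # create two 1 GiB GPT partitions on the test disk (same sector ranges as above)
    flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
    flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
    # build a device-mapper device over both partitions (illustrative linear table)
    printf '%s\n' \
      '0 2097152 linear /dev/nvme0n1p1 0' \
      '2097152 2097152 linear /dev/nvme0n1p2 0' | dmsetup create nvme_dm_test
    mkfs.ext4 -qF /dev/mapper/nvme_dm_test
    mkdir -p "$dm_mount" && mount /dev/mapper/nvme_dm_test "$dm_mount"
    # cleanup mirrors what the test does at the end
    umount "$dm_mount"
    dmsetup remove --force nvme_dm_test
    wipefs --all /dev/nvme0n1p1 /dev/nvme0n1p2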
************************************ 00:04:50.405 END TEST dm_mount 00:04:50.405 ************************************ 00:04:50.405 10:59:10 -- setup/devices.sh@1 -- # cleanup 00:04:50.405 10:59:10 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:50.405 10:59:10 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.405 10:59:10 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:50.405 10:59:10 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:50.405 10:59:10 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:50.405 10:59:10 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:50.664 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:50.664 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:50.664 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:50.664 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:50.664 10:59:11 -- setup/devices.sh@12 -- # cleanup_dm 00:04:50.664 10:59:11 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:50.664 10:59:11 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:50.664 10:59:11 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:50.664 10:59:11 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:50.664 10:59:11 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:50.664 10:59:11 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:50.664 00:04:50.664 real 0m31.286s 00:04:50.664 user 0m9.322s 00:04:50.664 sys 0m16.434s 00:04:50.664 10:59:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:50.664 10:59:11 -- common/autotest_common.sh@10 -- # set +x 00:04:50.664 ************************************ 00:04:50.664 END TEST devices 00:04:50.664 ************************************ 00:04:50.664 00:04:50.664 real 1m51.686s 00:04:50.664 user 0m33.778s 00:04:50.664 sys 1m0.201s 00:04:50.664 10:59:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:50.664 10:59:11 -- common/autotest_common.sh@10 -- # set +x 00:04:50.664 ************************************ 00:04:50.664 END TEST setup.sh 00:04:50.664 ************************************ 00:04:50.664 10:59:11 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:04:53.957 Hugepages 00:04:53.957 node hugesize free / total 00:04:53.957 node0 1048576kB 0 / 0 00:04:53.957 node0 2048kB 2048 / 2048 00:04:53.957 node1 1048576kB 0 / 0 00:04:53.957 node1 2048kB 0 / 0 00:04:53.957 00:04:53.957 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:53.957 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:53.957 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:53.957 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:53.957 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:53.957 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:53.957 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:53.957 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:53.957 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:53.957 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:53.957 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:53.957 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:53.957 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:53.957 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:53.957 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:53.957 I/OAT 
0000:80:04.6 8086 2021 1 ioatdma - - 00:04:53.957 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:53.957 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:53.957 10:59:13 -- spdk/autotest.sh@128 -- # uname -s 00:04:53.957 10:59:13 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:04:53.957 10:59:13 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:04:53.957 10:59:13 -- common/autotest_common.sh@1526 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:56.493 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:56.493 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:56.493 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:56.493 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:56.493 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:56.493 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:56.493 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:56.493 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:56.493 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:56.493 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:56.493 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:56.493 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:56.493 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:56.493 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:56.493 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:56.493 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:59.784 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:01.161 10:59:21 -- common/autotest_common.sh@1527 -- # sleep 1 00:05:02.101 10:59:22 -- common/autotest_common.sh@1528 -- # bdfs=() 00:05:02.101 10:59:22 -- common/autotest_common.sh@1528 -- # local bdfs 00:05:02.101 10:59:22 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:05:02.101 10:59:22 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:05:02.101 10:59:22 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:02.101 10:59:22 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:02.101 10:59:22 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:02.101 10:59:22 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:02.101 10:59:22 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:02.360 10:59:22 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:05:02.360 10:59:22 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:d8:00.0 00:05:02.360 10:59:22 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:04.896 Waiting for block devices as requested 00:05:04.896 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:04.896 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:04.896 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:05.155 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:05.155 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:05.155 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:05.155 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:05.414 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:05.414 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:05.414 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:05.414 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:05.673 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:05.673 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:05.673 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:05.932 0000:80:04.1 (8086 
2021): vfio-pci -> ioatdma 00:05:05.932 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:05.932 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:05:07.309 10:59:27 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:07.309 10:59:27 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:05:07.568 10:59:27 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 00:05:07.568 10:59:27 -- common/autotest_common.sh@1497 -- # grep 0000:d8:00.0/nvme/nvme 00:05:07.568 10:59:27 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:07.568 10:59:27 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:05:07.568 10:59:27 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:07.568 10:59:27 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:05:07.568 10:59:27 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:05:07.568 10:59:27 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:05:07.568 10:59:27 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:07.568 10:59:27 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:07.568 10:59:27 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:07.568 10:59:27 -- common/autotest_common.sh@1540 -- # oacs=' 0xe' 00:05:07.568 10:59:27 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:07.568 10:59:27 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:07.568 10:59:27 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:05:07.568 10:59:27 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:07.568 10:59:27 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:07.568 10:59:27 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:07.568 10:59:27 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:07.568 10:59:27 -- common/autotest_common.sh@1552 -- # continue 00:05:07.568 10:59:27 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:05:07.568 10:59:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:07.568 10:59:27 -- common/autotest_common.sh@10 -- # set +x 00:05:07.568 10:59:27 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:05:07.568 10:59:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:07.568 10:59:27 -- common/autotest_common.sh@10 -- # set +x 00:05:07.568 10:59:27 -- spdk/autotest.sh@137 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:10.105 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:10.105 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:10.105 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:10.105 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:10.364 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:10.364 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:10.364 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:10.364 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:10.364 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:10.364 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:10.364 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:10.364 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:10.364 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:10.364 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:10.364 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:10.364 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
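The oacs / unvmcap checks traced above amount to the following, using nvme-cli; the bit-masking line is a sketch of what autotest_common.sh computes (the exact expression is not visible in this excerpt):

    oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)        # 0xe on this drive
    ns_manage=$(( oacs & 0x8 ))      # OACS bit 3 set -> Namespace Management/Attachment supported
    unvmcap=$(nvme id-ctrl /dev/nvme0 | grep unvmcap | cut -d: -f2)
    # unvmcap is 0 here, i.e. no unallocated NVM capacity to reclaim,
    # so the revert loop simply continues to the next controller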
00:05:13.656 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:15.034 10:59:35 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:05:15.034 10:59:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:15.034 10:59:35 -- common/autotest_common.sh@10 -- # set +x 00:05:15.034 10:59:35 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:05:15.034 10:59:35 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:05:15.034 10:59:35 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:05:15.034 10:59:35 -- common/autotest_common.sh@1572 -- # bdfs=() 00:05:15.034 10:59:35 -- common/autotest_common.sh@1572 -- # local bdfs 00:05:15.034 10:59:35 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:05:15.034 10:59:35 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:15.034 10:59:35 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:15.034 10:59:35 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:15.034 10:59:35 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:15.034 10:59:35 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:15.294 10:59:35 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:05:15.294 10:59:35 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:d8:00.0 00:05:15.294 10:59:35 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:15.294 10:59:35 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:15.294 10:59:35 -- common/autotest_common.sh@1575 -- # device=0x0a54 00:05:15.294 10:59:35 -- common/autotest_common.sh@1576 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:15.294 10:59:35 -- common/autotest_common.sh@1577 -- # bdfs+=($bdf) 00:05:15.294 10:59:35 -- common/autotest_common.sh@1581 -- # printf '%s\n' 0000:d8:00.0 00:05:15.294 10:59:35 -- common/autotest_common.sh@1587 -- # [[ -z 0000:d8:00.0 ]] 00:05:15.294 10:59:35 -- common/autotest_common.sh@1592 -- # spdk_tgt_pid=1436007 00:05:15.294 10:59:35 -- common/autotest_common.sh@1593 -- # waitforlisten 1436007 00:05:15.294 10:59:35 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:15.294 10:59:35 -- common/autotest_common.sh@829 -- # '[' -z 1436007 ']' 00:05:15.294 10:59:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.294 10:59:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.294 10:59:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.294 10:59:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.294 10:59:35 -- common/autotest_common.sh@10 -- # set +x 00:05:15.294 [2024-12-13 10:59:35.732514] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:15.294 [2024-12-13 10:59:35.732558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1436007 ] 00:05:15.294 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.294 [2024-12-13 10:59:35.785664] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.294 [2024-12-13 10:59:35.854790] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:15.294 [2024-12-13 10:59:35.854895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.262 10:59:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.262 10:59:36 -- common/autotest_common.sh@862 -- # return 0 00:05:16.262 10:59:36 -- common/autotest_common.sh@1595 -- # bdf_id=0 00:05:16.262 10:59:36 -- common/autotest_common.sh@1596 -- # for bdf in "${bdfs[@]}" 00:05:16.262 10:59:36 -- common/autotest_common.sh@1597 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:19.550 nvme0n1 00:05:19.550 10:59:39 -- common/autotest_common.sh@1599 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:19.550 [2024-12-13 10:59:39.638352] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:19.550 request: 00:05:19.550 { 00:05:19.550 "nvme_ctrlr_name": "nvme0", 00:05:19.550 "password": "test", 00:05:19.550 "method": "bdev_nvme_opal_revert", 00:05:19.550 "req_id": 1 00:05:19.550 } 00:05:19.550 Got JSON-RPC error response 00:05:19.550 response: 00:05:19.550 { 00:05:19.550 "code": -32602, 00:05:19.550 "message": "Invalid parameters" 00:05:19.550 } 00:05:19.550 10:59:39 -- common/autotest_common.sh@1599 -- # true 00:05:19.550 10:59:39 -- common/autotest_common.sh@1600 -- # (( ++bdf_id )) 00:05:19.550 10:59:39 -- common/autotest_common.sh@1603 -- # killprocess 1436007 00:05:19.550 10:59:39 -- common/autotest_common.sh@936 -- # '[' -z 1436007 ']' 00:05:19.550 10:59:39 -- common/autotest_common.sh@940 -- # kill -0 1436007 00:05:19.550 10:59:39 -- common/autotest_common.sh@941 -- # uname 00:05:19.550 10:59:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:19.550 10:59:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1436007 00:05:19.550 10:59:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:19.550 10:59:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:19.550 10:59:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1436007' 00:05:19.550 killing process with pid 1436007 00:05:19.550 10:59:39 -- common/autotest_common.sh@955 -- # kill 1436007 00:05:19.550 10:59:39 -- common/autotest_common.sh@960 -- # wait 1436007 00:05:23.741 10:59:43 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:05:23.741 10:59:43 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:05:23.741 10:59:43 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:23.741 10:59:43 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:23.741 10:59:43 -- spdk/autotest.sh@160 -- # timing_enter lib 00:05:23.741 10:59:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:23.741 10:59:43 -- common/autotest_common.sh@10 -- # set +x 00:05:23.741 10:59:43 -- spdk/autotest.sh@162 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:23.741 10:59:43 -- 
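The target-side steps above can be reproduced by hand with something like the following, run from the spdk repo root; the polling loop is a simplified stand-in for waitforlisten in autotest_common.sh:

    ./build/bin/spdk_tgt &                      # same binary the test launches
    spdk_tgt_pid=$!
    # wait until the RPC server answers on the default UNIX socket
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0
    scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test   # fails with -32602 here: this controller has no Opal support
    kill "$spdk_tgt_pid" && wait "$spdk_tgt_pid"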
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:23.741 10:59:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.741 10:59:43 -- common/autotest_common.sh@10 -- # set +x 00:05:23.741 ************************************ 00:05:23.741 START TEST env 00:05:23.741 ************************************ 00:05:23.741 10:59:43 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:23.741 * Looking for test storage... 00:05:23.741 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:05:23.741 10:59:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:23.742 10:59:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:23.742 10:59:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:23.742 10:59:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:23.742 10:59:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:23.742 10:59:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:23.742 10:59:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:23.742 10:59:43 -- scripts/common.sh@335 -- # IFS=.-: 00:05:23.742 10:59:43 -- scripts/common.sh@335 -- # read -ra ver1 00:05:23.742 10:59:43 -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.742 10:59:43 -- scripts/common.sh@336 -- # read -ra ver2 00:05:23.742 10:59:43 -- scripts/common.sh@337 -- # local 'op=<' 00:05:23.742 10:59:43 -- scripts/common.sh@339 -- # ver1_l=2 00:05:23.742 10:59:43 -- scripts/common.sh@340 -- # ver2_l=1 00:05:23.742 10:59:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:23.742 10:59:43 -- scripts/common.sh@343 -- # case "$op" in 00:05:23.742 10:59:43 -- scripts/common.sh@344 -- # : 1 00:05:23.742 10:59:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:23.742 10:59:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.742 10:59:43 -- scripts/common.sh@364 -- # decimal 1 00:05:23.742 10:59:43 -- scripts/common.sh@352 -- # local d=1 00:05:23.742 10:59:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.742 10:59:43 -- scripts/common.sh@354 -- # echo 1 00:05:23.742 10:59:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:23.742 10:59:43 -- scripts/common.sh@365 -- # decimal 2 00:05:23.742 10:59:43 -- scripts/common.sh@352 -- # local d=2 00:05:23.742 10:59:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.742 10:59:43 -- scripts/common.sh@354 -- # echo 2 00:05:23.742 10:59:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:23.742 10:59:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:23.742 10:59:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:23.742 10:59:43 -- scripts/common.sh@367 -- # return 0 00:05:23.742 10:59:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.742 10:59:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:23.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.742 --rc genhtml_branch_coverage=1 00:05:23.742 --rc genhtml_function_coverage=1 00:05:23.742 --rc genhtml_legend=1 00:05:23.742 --rc geninfo_all_blocks=1 00:05:23.742 --rc geninfo_unexecuted_blocks=1 00:05:23.742 00:05:23.742 ' 00:05:23.742 10:59:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:23.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.742 --rc genhtml_branch_coverage=1 00:05:23.742 --rc genhtml_function_coverage=1 00:05:23.742 --rc genhtml_legend=1 00:05:23.742 --rc geninfo_all_blocks=1 00:05:23.742 --rc geninfo_unexecuted_blocks=1 00:05:23.742 00:05:23.742 ' 00:05:23.742 10:59:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:23.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.742 --rc genhtml_branch_coverage=1 00:05:23.742 --rc genhtml_function_coverage=1 00:05:23.742 --rc genhtml_legend=1 00:05:23.742 --rc geninfo_all_blocks=1 00:05:23.742 --rc geninfo_unexecuted_blocks=1 00:05:23.742 00:05:23.742 ' 00:05:23.742 10:59:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:23.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.742 --rc genhtml_branch_coverage=1 00:05:23.742 --rc genhtml_function_coverage=1 00:05:23.742 --rc genhtml_legend=1 00:05:23.742 --rc geninfo_all_blocks=1 00:05:23.742 --rc geninfo_unexecuted_blocks=1 00:05:23.742 00:05:23.742 ' 00:05:23.742 10:59:43 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:23.742 10:59:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:23.742 10:59:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.742 10:59:43 -- common/autotest_common.sh@10 -- # set +x 00:05:23.742 ************************************ 00:05:23.742 START TEST env_memory 00:05:23.742 ************************************ 00:05:23.742 10:59:43 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:23.742 00:05:23.742 00:05:23.742 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.742 http://cunit.sourceforge.net/ 00:05:23.742 00:05:23.742 00:05:23.742 Suite: memory 00:05:23.742 Test: alloc and free memory map ...[2024-12-13 10:59:43.880424] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify 
failed 00:05:23.742 passed 00:05:23.742 Test: mem map translation ...[2024-12-13 10:59:43.896855] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:23.742 [2024-12-13 10:59:43.896867] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:23.742 [2024-12-13 10:59:43.896899] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:23.742 [2024-12-13 10:59:43.896904] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:23.742 passed 00:05:23.742 Test: mem map registration ...[2024-12-13 10:59:43.930023] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:23.742 [2024-12-13 10:59:43.930035] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:23.742 passed 00:05:23.742 Test: mem map adjacent registrations ...passed 00:05:23.742 00:05:23.742 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.742 suites 1 1 n/a 0 0 00:05:23.742 tests 4 4 4 0 0 00:05:23.742 asserts 152 152 152 0 n/a 00:05:23.742 00:05:23.742 Elapsed time = 0.126 seconds 00:05:23.742 00:05:23.742 real 0m0.138s 00:05:23.742 user 0m0.131s 00:05:23.742 sys 0m0.007s 00:05:23.742 10:59:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:23.742 10:59:43 -- common/autotest_common.sh@10 -- # set +x 00:05:23.742 ************************************ 00:05:23.742 END TEST env_memory 00:05:23.742 ************************************ 00:05:23.742 10:59:44 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:23.742 10:59:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:23.742 10:59:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.742 10:59:44 -- common/autotest_common.sh@10 -- # set +x 00:05:23.742 ************************************ 00:05:23.742 START TEST env_vtophys 00:05:23.742 ************************************ 00:05:23.742 10:59:44 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:23.742 EAL: lib.eal log level changed from notice to debug 00:05:23.742 EAL: Detected lcore 0 as core 0 on socket 0 00:05:23.742 EAL: Detected lcore 1 as core 1 on socket 0 00:05:23.742 EAL: Detected lcore 2 as core 2 on socket 0 00:05:23.742 EAL: Detected lcore 3 as core 3 on socket 0 00:05:23.742 EAL: Detected lcore 4 as core 4 on socket 0 00:05:23.742 EAL: Detected lcore 5 as core 5 on socket 0 00:05:23.742 EAL: Detected lcore 6 as core 6 on socket 0 00:05:23.742 EAL: Detected lcore 7 as core 8 on socket 0 00:05:23.742 EAL: Detected lcore 8 as core 9 on socket 0 00:05:23.742 EAL: Detected lcore 9 as core 10 on socket 0 00:05:23.742 EAL: Detected lcore 10 as core 11 on socket 0 00:05:23.742 EAL: Detected lcore 11 as core 12 on socket 0 00:05:23.742 EAL: Detected lcore 12 as core 13 on socket 0 00:05:23.742 EAL: Detected lcore 13 as core 14 on socket 0 00:05:23.742 EAL: 
Detected lcore 14 as core 16 on socket 0 00:05:23.742 EAL: Detected lcore 15 as core 17 on socket 0 00:05:23.742 EAL: Detected lcore 16 as core 18 on socket 0 00:05:23.742 EAL: Detected lcore 17 as core 19 on socket 0 00:05:23.742 EAL: Detected lcore 18 as core 20 on socket 0 00:05:23.742 EAL: Detected lcore 19 as core 21 on socket 0 00:05:23.742 EAL: Detected lcore 20 as core 22 on socket 0 00:05:23.742 EAL: Detected lcore 21 as core 24 on socket 0 00:05:23.742 EAL: Detected lcore 22 as core 25 on socket 0 00:05:23.742 EAL: Detected lcore 23 as core 26 on socket 0 00:05:23.742 EAL: Detected lcore 24 as core 27 on socket 0 00:05:23.742 EAL: Detected lcore 25 as core 28 on socket 0 00:05:23.742 EAL: Detected lcore 26 as core 29 on socket 0 00:05:23.742 EAL: Detected lcore 27 as core 30 on socket 0 00:05:23.742 EAL: Detected lcore 28 as core 0 on socket 1 00:05:23.742 EAL: Detected lcore 29 as core 1 on socket 1 00:05:23.742 EAL: Detected lcore 30 as core 2 on socket 1 00:05:23.742 EAL: Detected lcore 31 as core 3 on socket 1 00:05:23.742 EAL: Detected lcore 32 as core 4 on socket 1 00:05:23.742 EAL: Detected lcore 33 as core 5 on socket 1 00:05:23.742 EAL: Detected lcore 34 as core 6 on socket 1 00:05:23.742 EAL: Detected lcore 35 as core 8 on socket 1 00:05:23.742 EAL: Detected lcore 36 as core 9 on socket 1 00:05:23.742 EAL: Detected lcore 37 as core 10 on socket 1 00:05:23.742 EAL: Detected lcore 38 as core 11 on socket 1 00:05:23.742 EAL: Detected lcore 39 as core 12 on socket 1 00:05:23.742 EAL: Detected lcore 40 as core 13 on socket 1 00:05:23.742 EAL: Detected lcore 41 as core 14 on socket 1 00:05:23.742 EAL: Detected lcore 42 as core 16 on socket 1 00:05:23.742 EAL: Detected lcore 43 as core 17 on socket 1 00:05:23.742 EAL: Detected lcore 44 as core 18 on socket 1 00:05:23.742 EAL: Detected lcore 45 as core 19 on socket 1 00:05:23.742 EAL: Detected lcore 46 as core 20 on socket 1 00:05:23.742 EAL: Detected lcore 47 as core 21 on socket 1 00:05:23.742 EAL: Detected lcore 48 as core 22 on socket 1 00:05:23.742 EAL: Detected lcore 49 as core 24 on socket 1 00:05:23.742 EAL: Detected lcore 50 as core 25 on socket 1 00:05:23.742 EAL: Detected lcore 51 as core 26 on socket 1 00:05:23.742 EAL: Detected lcore 52 as core 27 on socket 1 00:05:23.742 EAL: Detected lcore 53 as core 28 on socket 1 00:05:23.742 EAL: Detected lcore 54 as core 29 on socket 1 00:05:23.742 EAL: Detected lcore 55 as core 30 on socket 1 00:05:23.742 EAL: Detected lcore 56 as core 0 on socket 0 00:05:23.742 EAL: Detected lcore 57 as core 1 on socket 0 00:05:23.742 EAL: Detected lcore 58 as core 2 on socket 0 00:05:23.742 EAL: Detected lcore 59 as core 3 on socket 0 00:05:23.743 EAL: Detected lcore 60 as core 4 on socket 0 00:05:23.743 EAL: Detected lcore 61 as core 5 on socket 0 00:05:23.743 EAL: Detected lcore 62 as core 6 on socket 0 00:05:23.743 EAL: Detected lcore 63 as core 8 on socket 0 00:05:23.743 EAL: Detected lcore 64 as core 9 on socket 0 00:05:23.743 EAL: Detected lcore 65 as core 10 on socket 0 00:05:23.743 EAL: Detected lcore 66 as core 11 on socket 0 00:05:23.743 EAL: Detected lcore 67 as core 12 on socket 0 00:05:23.743 EAL: Detected lcore 68 as core 13 on socket 0 00:05:23.743 EAL: Detected lcore 69 as core 14 on socket 0 00:05:23.743 EAL: Detected lcore 70 as core 16 on socket 0 00:05:23.743 EAL: Detected lcore 71 as core 17 on socket 0 00:05:23.743 EAL: Detected lcore 72 as core 18 on socket 0 00:05:23.743 EAL: Detected lcore 73 as core 19 on socket 0 00:05:23.743 EAL: Detected lcore 74 as core 20 on 
socket 0 00:05:23.743 EAL: Detected lcore 75 as core 21 on socket 0 00:05:23.743 EAL: Detected lcore 76 as core 22 on socket 0 00:05:23.743 EAL: Detected lcore 77 as core 24 on socket 0 00:05:23.743 EAL: Detected lcore 78 as core 25 on socket 0 00:05:23.743 EAL: Detected lcore 79 as core 26 on socket 0 00:05:23.743 EAL: Detected lcore 80 as core 27 on socket 0 00:05:23.743 EAL: Detected lcore 81 as core 28 on socket 0 00:05:23.743 EAL: Detected lcore 82 as core 29 on socket 0 00:05:23.743 EAL: Detected lcore 83 as core 30 on socket 0 00:05:23.743 EAL: Detected lcore 84 as core 0 on socket 1 00:05:23.743 EAL: Detected lcore 85 as core 1 on socket 1 00:05:23.743 EAL: Detected lcore 86 as core 2 on socket 1 00:05:23.743 EAL: Detected lcore 87 as core 3 on socket 1 00:05:23.743 EAL: Detected lcore 88 as core 4 on socket 1 00:05:23.743 EAL: Detected lcore 89 as core 5 on socket 1 00:05:23.743 EAL: Detected lcore 90 as core 6 on socket 1 00:05:23.743 EAL: Detected lcore 91 as core 8 on socket 1 00:05:23.743 EAL: Detected lcore 92 as core 9 on socket 1 00:05:23.743 EAL: Detected lcore 93 as core 10 on socket 1 00:05:23.743 EAL: Detected lcore 94 as core 11 on socket 1 00:05:23.743 EAL: Detected lcore 95 as core 12 on socket 1 00:05:23.743 EAL: Detected lcore 96 as core 13 on socket 1 00:05:23.743 EAL: Detected lcore 97 as core 14 on socket 1 00:05:23.743 EAL: Detected lcore 98 as core 16 on socket 1 00:05:23.743 EAL: Detected lcore 99 as core 17 on socket 1 00:05:23.743 EAL: Detected lcore 100 as core 18 on socket 1 00:05:23.743 EAL: Detected lcore 101 as core 19 on socket 1 00:05:23.743 EAL: Detected lcore 102 as core 20 on socket 1 00:05:23.743 EAL: Detected lcore 103 as core 21 on socket 1 00:05:23.743 EAL: Detected lcore 104 as core 22 on socket 1 00:05:23.743 EAL: Detected lcore 105 as core 24 on socket 1 00:05:23.743 EAL: Detected lcore 106 as core 25 on socket 1 00:05:23.743 EAL: Detected lcore 107 as core 26 on socket 1 00:05:23.743 EAL: Detected lcore 108 as core 27 on socket 1 00:05:23.743 EAL: Detected lcore 109 as core 28 on socket 1 00:05:23.743 EAL: Detected lcore 110 as core 29 on socket 1 00:05:23.743 EAL: Detected lcore 111 as core 30 on socket 1 00:05:23.743 EAL: Maximum logical cores by configuration: 128 00:05:23.743 EAL: Detected CPU lcores: 112 00:05:23.743 EAL: Detected NUMA nodes: 2 00:05:23.743 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:23.743 EAL: Detected shared linkage of DPDK 00:05:23.743 EAL: No shared files mode enabled, IPC will be disabled 00:05:23.743 EAL: Bus pci wants IOVA as 'DC' 00:05:23.743 EAL: Buses did not request a specific IOVA mode. 00:05:23.743 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:23.743 EAL: Selected IOVA mode 'VA' 00:05:23.743 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.743 EAL: Probing VFIO support... 00:05:23.743 EAL: IOMMU type 1 (Type 1) is supported 00:05:23.743 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:23.743 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:23.743 EAL: VFIO support initialized 00:05:23.743 EAL: Ask a virtual area of 0x2e000 bytes 00:05:23.743 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:23.743 EAL: Setting up physically contiguous memory... 
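The "VFIO support initialized" message above depends on the ioatdma/nvme -> vfio-pci rebinds performed earlier by setup.sh. The underlying sysfs mechanism looks roughly like this (illustrative BDF; setup.sh layers allowlist handling and hugepage setup on top):

    bdf=0000:d8:00.0                                              # any BDF from the rebind lines above
    echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"       # detach from nvme/ioatdma
    echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"   # pin the next probe to vfio-pci
    echo "$bdf" > /sys/bus/pci/drivers_probe                      # re-probe; the device now binds to vfio-pci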
00:05:23.743 EAL: Setting maximum number of open files to 524288 00:05:23.743 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:23.743 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:23.743 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:23.743 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.743 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:23.743 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:23.743 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.743 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:23.743 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:23.743 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.743 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:23.743 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:23.743 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.743 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:23.743 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:23.743 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.743 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:23.743 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:23.743 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.743 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:23.743 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:23.743 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.743 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:23.743 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:23.743 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.743 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:23.743 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:23.743 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:23.743 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.743 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:23.743 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:23.743 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.743 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:23.743 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:23.743 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.743 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:23.743 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:23.743 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.743 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:23.743 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:23.743 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.743 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:23.743 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:23.743 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.743 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:23.743 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:23.743 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.743 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:23.743 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:23.743 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.743 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:23.743 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:23.743 EAL: Hugepages will be freed exactly as allocated. 00:05:23.743 EAL: No shared files mode enabled, IPC is disabled 00:05:23.743 EAL: No shared files mode enabled, IPC is disabled 00:05:23.743 EAL: TSC frequency is ~2700000 KHz 00:05:23.743 EAL: Main lcore 0 is ready (tid=7f5dd4846a00;cpuset=[0]) 00:05:23.743 EAL: Trying to obtain current memory policy. 00:05:23.743 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.743 EAL: Restoring previous memory policy: 0 00:05:23.743 EAL: request: mp_malloc_sync 00:05:23.743 EAL: No shared files mode enabled, IPC is disabled 00:05:23.743 EAL: Heap on socket 0 was expanded by 2MB 00:05:23.743 EAL: No shared files mode enabled, IPC is disabled 00:05:23.743 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:23.743 EAL: Mem event callback 'spdk:(nil)' registered 00:05:23.743 00:05:23.743 00:05:23.743 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.743 http://cunit.sourceforge.net/ 00:05:23.743 00:05:23.743 00:05:23.743 Suite: components_suite 00:05:23.743 Test: vtophys_malloc_test ...passed 00:05:23.743 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:23.743 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.743 EAL: Restoring previous memory policy: 4 00:05:23.743 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.743 EAL: request: mp_malloc_sync 00:05:23.743 EAL: No shared files mode enabled, IPC is disabled 00:05:23.743 EAL: Heap on socket 0 was expanded by 4MB 00:05:23.743 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.743 EAL: request: mp_malloc_sync 00:05:23.743 EAL: No shared files mode enabled, IPC is disabled 00:05:23.743 EAL: Heap on socket 0 was shrunk by 4MB 00:05:23.743 EAL: Trying to obtain current memory policy. 00:05:23.743 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.743 EAL: Restoring previous memory policy: 4 00:05:23.743 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.743 EAL: request: mp_malloc_sync 00:05:23.743 EAL: No shared files mode enabled, IPC is disabled 00:05:23.743 EAL: Heap on socket 0 was expanded by 6MB 00:05:23.743 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.743 EAL: request: mp_malloc_sync 00:05:23.743 EAL: No shared files mode enabled, IPC is disabled 00:05:23.743 EAL: Heap on socket 0 was shrunk by 6MB 00:05:23.743 EAL: Trying to obtain current memory policy. 00:05:23.743 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.743 EAL: Restoring previous memory policy: 4 00:05:23.743 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.743 EAL: request: mp_malloc_sync 00:05:23.743 EAL: No shared files mode enabled, IPC is disabled 00:05:23.743 EAL: Heap on socket 0 was expanded by 10MB 00:05:23.743 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.743 EAL: request: mp_malloc_sync 00:05:23.743 EAL: No shared files mode enabled, IPC is disabled 00:05:23.743 EAL: Heap on socket 0 was shrunk by 10MB 00:05:23.743 EAL: Trying to obtain current memory policy. 
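The memseg lists reserved above are backed by the 2048 kB hugepages shown in the earlier Hugepages summary (2048 pages on node0). If needed, that state can be inspected on the test node with:

    grep -i ^huge /proc/meminfo                     # HugePages_Total / HugePages_Free / Hugepagesize
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
    ls /dev/hugepages                               # backing files; may be empty when --huge-unlink is used, as spdk_tgt did above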
00:05:23.743 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.743 EAL: Restoring previous memory policy: 4 00:05:23.743 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.743 EAL: request: mp_malloc_sync 00:05:23.743 EAL: No shared files mode enabled, IPC is disabled 00:05:23.743 EAL: Heap on socket 0 was expanded by 18MB 00:05:23.743 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.743 EAL: request: mp_malloc_sync 00:05:23.743 EAL: No shared files mode enabled, IPC is disabled 00:05:23.743 EAL: Heap on socket 0 was shrunk by 18MB 00:05:23.744 EAL: Trying to obtain current memory policy. 00:05:23.744 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.744 EAL: Restoring previous memory policy: 4 00:05:23.744 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.744 EAL: request: mp_malloc_sync 00:05:23.744 EAL: No shared files mode enabled, IPC is disabled 00:05:23.744 EAL: Heap on socket 0 was expanded by 34MB 00:05:23.744 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.744 EAL: request: mp_malloc_sync 00:05:23.744 EAL: No shared files mode enabled, IPC is disabled 00:05:23.744 EAL: Heap on socket 0 was shrunk by 34MB 00:05:23.744 EAL: Trying to obtain current memory policy. 00:05:23.744 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.744 EAL: Restoring previous memory policy: 4 00:05:23.744 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.744 EAL: request: mp_malloc_sync 00:05:23.744 EAL: No shared files mode enabled, IPC is disabled 00:05:23.744 EAL: Heap on socket 0 was expanded by 66MB 00:05:23.744 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.744 EAL: request: mp_malloc_sync 00:05:23.744 EAL: No shared files mode enabled, IPC is disabled 00:05:23.744 EAL: Heap on socket 0 was shrunk by 66MB 00:05:23.744 EAL: Trying to obtain current memory policy. 00:05:23.744 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.744 EAL: Restoring previous memory policy: 4 00:05:23.744 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.744 EAL: request: mp_malloc_sync 00:05:23.744 EAL: No shared files mode enabled, IPC is disabled 00:05:23.744 EAL: Heap on socket 0 was expanded by 130MB 00:05:23.744 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.744 EAL: request: mp_malloc_sync 00:05:23.744 EAL: No shared files mode enabled, IPC is disabled 00:05:23.744 EAL: Heap on socket 0 was shrunk by 130MB 00:05:23.744 EAL: Trying to obtain current memory policy. 00:05:23.744 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.744 EAL: Restoring previous memory policy: 4 00:05:23.744 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.744 EAL: request: mp_malloc_sync 00:05:23.744 EAL: No shared files mode enabled, IPC is disabled 00:05:23.744 EAL: Heap on socket 0 was expanded by 258MB 00:05:23.744 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.003 EAL: request: mp_malloc_sync 00:05:24.003 EAL: No shared files mode enabled, IPC is disabled 00:05:24.003 EAL: Heap on socket 0 was shrunk by 258MB 00:05:24.003 EAL: Trying to obtain current memory policy. 
00:05:24.003 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.003 EAL: Restoring previous memory policy: 4 00:05:24.003 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.003 EAL: request: mp_malloc_sync 00:05:24.003 EAL: No shared files mode enabled, IPC is disabled 00:05:24.003 EAL: Heap on socket 0 was expanded by 514MB 00:05:24.003 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.003 EAL: request: mp_malloc_sync 00:05:24.003 EAL: No shared files mode enabled, IPC is disabled 00:05:24.003 EAL: Heap on socket 0 was shrunk by 514MB 00:05:24.003 EAL: Trying to obtain current memory policy. 00:05:24.003 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.263 EAL: Restoring previous memory policy: 4 00:05:24.263 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.263 EAL: request: mp_malloc_sync 00:05:24.263 EAL: No shared files mode enabled, IPC is disabled 00:05:24.263 EAL: Heap on socket 0 was expanded by 1026MB 00:05:24.522 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.522 EAL: request: mp_malloc_sync 00:05:24.522 EAL: No shared files mode enabled, IPC is disabled 00:05:24.522 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:24.522 passed 00:05:24.522 00:05:24.522 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.522 suites 1 1 n/a 0 0 00:05:24.522 tests 2 2 2 0 0 00:05:24.522 asserts 497 497 497 0 n/a 00:05:24.522 00:05:24.522 Elapsed time = 0.948 seconds 00:05:24.522 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.522 EAL: request: mp_malloc_sync 00:05:24.522 EAL: No shared files mode enabled, IPC is disabled 00:05:24.522 EAL: Heap on socket 0 was shrunk by 2MB 00:05:24.522 EAL: No shared files mode enabled, IPC is disabled 00:05:24.522 EAL: No shared files mode enabled, IPC is disabled 00:05:24.523 EAL: No shared files mode enabled, IPC is disabled 00:05:24.523 00:05:24.523 real 0m1.056s 00:05:24.523 user 0m0.622s 00:05:24.523 sys 0m0.409s 00:05:24.523 10:59:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:24.523 10:59:45 -- common/autotest_common.sh@10 -- # set +x 00:05:24.523 ************************************ 00:05:24.523 END TEST env_vtophys 00:05:24.523 ************************************ 00:05:24.781 10:59:45 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:24.781 10:59:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:24.781 10:59:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:24.781 10:59:45 -- common/autotest_common.sh@10 -- # set +x 00:05:24.781 ************************************ 00:05:24.781 START TEST env_pci 00:05:24.781 ************************************ 00:05:24.781 10:59:45 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:24.781 00:05:24.781 00:05:24.781 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.781 http://cunit.sourceforge.net/ 00:05:24.781 00:05:24.781 00:05:24.781 Suite: pci 00:05:24.781 Test: pci_hook ...[2024-12-13 10:59:45.119195] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1437878 has claimed it 00:05:24.781 EAL: Cannot find device (10000:00:01.0) 00:05:24.781 EAL: Failed to attach device on primary process 00:05:24.781 passed 00:05:24.781 00:05:24.781 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.781 suites 1 1 n/a 0 0 00:05:24.781 tests 1 1 1 0 0 00:05:24.781 asserts 
25 25 25 0 n/a 00:05:24.781 00:05:24.781 Elapsed time = 0.025 seconds 00:05:24.781 00:05:24.781 real 0m0.044s 00:05:24.781 user 0m0.013s 00:05:24.781 sys 0m0.030s 00:05:24.781 10:59:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:24.781 10:59:45 -- common/autotest_common.sh@10 -- # set +x 00:05:24.781 ************************************ 00:05:24.781 END TEST env_pci 00:05:24.781 ************************************ 00:05:24.781 10:59:45 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:24.781 10:59:45 -- env/env.sh@15 -- # uname 00:05:24.781 10:59:45 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:24.781 10:59:45 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:24.781 10:59:45 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:24.781 10:59:45 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:24.781 10:59:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:24.781 10:59:45 -- common/autotest_common.sh@10 -- # set +x 00:05:24.781 ************************************ 00:05:24.781 START TEST env_dpdk_post_init 00:05:24.781 ************************************ 00:05:24.781 10:59:45 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:24.781 EAL: Detected CPU lcores: 112 00:05:24.781 EAL: Detected NUMA nodes: 2 00:05:24.781 EAL: Detected shared linkage of DPDK 00:05:24.781 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:24.781 EAL: Selected IOVA mode 'VA' 00:05:24.781 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.781 EAL: VFIO support initialized 00:05:24.781 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:24.781 EAL: Using IOMMU type 1 (Type 1) 00:05:24.781 EAL: Ignore mapping IO port bar(1) 00:05:24.781 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:24.781 EAL: Ignore mapping IO port bar(1) 00:05:24.781 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:24.781 EAL: Ignore mapping IO port bar(1) 00:05:24.781 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:24.781 EAL: Ignore mapping IO port bar(1) 00:05:24.781 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:24.781 EAL: Ignore mapping IO port bar(1) 00:05:24.781 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:25.041 EAL: Ignore mapping IO port bar(1) 00:05:25.041 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:25.041 EAL: Ignore mapping IO port bar(1) 00:05:25.041 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:25.041 EAL: Ignore mapping IO port bar(1) 00:05:25.041 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:25.041 EAL: Ignore mapping IO port bar(1) 00:05:25.041 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:25.041 EAL: Ignore mapping IO port bar(1) 00:05:25.041 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:25.041 EAL: Ignore mapping IO port bar(1) 00:05:25.041 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:25.041 EAL: Ignore mapping IO port bar(1) 00:05:25.041 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 
0000:80:04.3 (socket 1) 00:05:25.041 EAL: Ignore mapping IO port bar(1) 00:05:25.041 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:25.041 EAL: Ignore mapping IO port bar(1) 00:05:25.041 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:25.041 EAL: Ignore mapping IO port bar(1) 00:05:25.041 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:25.041 EAL: Ignore mapping IO port bar(1) 00:05:25.041 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:25.987 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:05:31.260 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:05:31.261 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:05:31.520 Starting DPDK initialization... 00:05:31.520 Starting SPDK post initialization... 00:05:31.520 SPDK NVMe probe 00:05:31.520 Attaching to 0000:d8:00.0 00:05:31.520 Attached to 0000:d8:00.0 00:05:31.520 Cleaning up... 00:05:31.520 00:05:31.520 real 0m6.676s 00:05:31.520 user 0m5.394s 00:05:31.520 sys 0m0.343s 00:05:31.520 10:59:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:31.520 10:59:51 -- common/autotest_common.sh@10 -- # set +x 00:05:31.520 ************************************ 00:05:31.520 END TEST env_dpdk_post_init 00:05:31.520 ************************************ 00:05:31.520 10:59:51 -- env/env.sh@26 -- # uname 00:05:31.520 10:59:51 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:31.520 10:59:51 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:31.520 10:59:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:31.520 10:59:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.520 10:59:51 -- common/autotest_common.sh@10 -- # set +x 00:05:31.520 ************************************ 00:05:31.520 START TEST env_mem_callbacks 00:05:31.520 ************************************ 00:05:31.520 10:59:51 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:31.520 EAL: Detected CPU lcores: 112 00:05:31.520 EAL: Detected NUMA nodes: 2 00:05:31.520 EAL: Detected shared linkage of DPDK 00:05:31.520 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:31.520 EAL: Selected IOVA mode 'VA' 00:05:31.520 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.520 EAL: VFIO support initialized 00:05:31.520 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:31.520 00:05:31.520 00:05:31.520 CUnit - A unit testing framework for C - Version 2.1-3 00:05:31.520 http://cunit.sourceforge.net/ 00:05:31.520 00:05:31.520 00:05:31.520 Suite: memory 00:05:31.520 Test: test ... 
00:05:31.520 register 0x200000200000 2097152 00:05:31.520 malloc 3145728 00:05:31.520 register 0x200000400000 4194304 00:05:31.520 buf 0x200000500000 len 3145728 PASSED 00:05:31.520 malloc 64 00:05:31.520 buf 0x2000004fff40 len 64 PASSED 00:05:31.520 malloc 4194304 00:05:31.520 register 0x200000800000 6291456 00:05:31.520 buf 0x200000a00000 len 4194304 PASSED 00:05:31.520 free 0x200000500000 3145728 00:05:31.520 free 0x2000004fff40 64 00:05:31.520 unregister 0x200000400000 4194304 PASSED 00:05:31.520 free 0x200000a00000 4194304 00:05:31.520 unregister 0x200000800000 6291456 PASSED 00:05:31.520 malloc 8388608 00:05:31.520 register 0x200000400000 10485760 00:05:31.520 buf 0x200000600000 len 8388608 PASSED 00:05:31.520 free 0x200000600000 8388608 00:05:31.520 unregister 0x200000400000 10485760 PASSED 00:05:31.520 passed 00:05:31.520 00:05:31.520 Run Summary: Type Total Ran Passed Failed Inactive 00:05:31.520 suites 1 1 n/a 0 0 00:05:31.520 tests 1 1 1 0 0 00:05:31.520 asserts 15 15 15 0 n/a 00:05:31.520 00:05:31.520 Elapsed time = 0.004 seconds 00:05:31.520 00:05:31.520 real 0m0.053s 00:05:31.520 user 0m0.021s 00:05:31.521 sys 0m0.032s 00:05:31.521 10:59:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:31.521 10:59:51 -- common/autotest_common.sh@10 -- # set +x 00:05:31.521 ************************************ 00:05:31.521 END TEST env_mem_callbacks 00:05:31.521 ************************************ 00:05:31.521 00:05:31.521 real 0m8.317s 00:05:31.521 user 0m6.344s 00:05:31.521 sys 0m1.046s 00:05:31.521 10:59:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:31.521 10:59:51 -- common/autotest_common.sh@10 -- # set +x 00:05:31.521 ************************************ 00:05:31.521 END TEST env 00:05:31.521 ************************************ 00:05:31.521 10:59:52 -- spdk/autotest.sh@163 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:31.521 10:59:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:31.521 10:59:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.521 10:59:52 -- common/autotest_common.sh@10 -- # set +x 00:05:31.521 ************************************ 00:05:31.521 START TEST rpc 00:05:31.521 ************************************ 00:05:31.521 10:59:52 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:31.780 * Looking for test storage... 
00:05:31.780 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:31.780 10:59:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:31.780 10:59:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:31.780 10:59:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:31.780 10:59:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:31.780 10:59:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:31.780 10:59:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:31.780 10:59:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:31.780 10:59:52 -- scripts/common.sh@335 -- # IFS=.-: 00:05:31.780 10:59:52 -- scripts/common.sh@335 -- # read -ra ver1 00:05:31.780 10:59:52 -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.780 10:59:52 -- scripts/common.sh@336 -- # read -ra ver2 00:05:31.780 10:59:52 -- scripts/common.sh@337 -- # local 'op=<' 00:05:31.780 10:59:52 -- scripts/common.sh@339 -- # ver1_l=2 00:05:31.780 10:59:52 -- scripts/common.sh@340 -- # ver2_l=1 00:05:31.780 10:59:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:31.780 10:59:52 -- scripts/common.sh@343 -- # case "$op" in 00:05:31.780 10:59:52 -- scripts/common.sh@344 -- # : 1 00:05:31.780 10:59:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:31.780 10:59:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:31.780 10:59:52 -- scripts/common.sh@364 -- # decimal 1 00:05:31.780 10:59:52 -- scripts/common.sh@352 -- # local d=1 00:05:31.780 10:59:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.780 10:59:52 -- scripts/common.sh@354 -- # echo 1 00:05:31.780 10:59:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:31.780 10:59:52 -- scripts/common.sh@365 -- # decimal 2 00:05:31.780 10:59:52 -- scripts/common.sh@352 -- # local d=2 00:05:31.780 10:59:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.781 10:59:52 -- scripts/common.sh@354 -- # echo 2 00:05:31.781 10:59:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:31.781 10:59:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:31.781 10:59:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:31.781 10:59:52 -- scripts/common.sh@367 -- # return 0 00:05:31.781 10:59:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.781 10:59:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:31.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.781 --rc genhtml_branch_coverage=1 00:05:31.781 --rc genhtml_function_coverage=1 00:05:31.781 --rc genhtml_legend=1 00:05:31.781 --rc geninfo_all_blocks=1 00:05:31.781 --rc geninfo_unexecuted_blocks=1 00:05:31.781 00:05:31.781 ' 00:05:31.781 10:59:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:31.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.781 --rc genhtml_branch_coverage=1 00:05:31.781 --rc genhtml_function_coverage=1 00:05:31.781 --rc genhtml_legend=1 00:05:31.781 --rc geninfo_all_blocks=1 00:05:31.781 --rc geninfo_unexecuted_blocks=1 00:05:31.781 00:05:31.781 ' 00:05:31.781 10:59:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:31.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.781 --rc genhtml_branch_coverage=1 00:05:31.781 --rc genhtml_function_coverage=1 00:05:31.781 --rc genhtml_legend=1 00:05:31.781 --rc geninfo_all_blocks=1 00:05:31.781 --rc geninfo_unexecuted_blocks=1 00:05:31.781 00:05:31.781 ' 
00:05:31.781 10:59:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:31.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.781 --rc genhtml_branch_coverage=1 00:05:31.781 --rc genhtml_function_coverage=1 00:05:31.781 --rc genhtml_legend=1 00:05:31.781 --rc geninfo_all_blocks=1 00:05:31.781 --rc geninfo_unexecuted_blocks=1 00:05:31.781 00:05:31.781 ' 00:05:31.781 10:59:52 -- rpc/rpc.sh@65 -- # spdk_pid=1439331 00:05:31.781 10:59:52 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.781 10:59:52 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:31.781 10:59:52 -- rpc/rpc.sh@67 -- # waitforlisten 1439331 00:05:31.781 10:59:52 -- common/autotest_common.sh@829 -- # '[' -z 1439331 ']' 00:05:31.781 10:59:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.781 10:59:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:31.781 10:59:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.781 10:59:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:31.781 10:59:52 -- common/autotest_common.sh@10 -- # set +x 00:05:31.781 [2024-12-13 10:59:52.226777] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:31.781 [2024-12-13 10:59:52.226820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1439331 ] 00:05:31.781 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.781 [2024-12-13 10:59:52.285619] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.040 [2024-12-13 10:59:52.374680] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:32.040 [2024-12-13 10:59:52.374807] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:32.040 [2024-12-13 10:59:52.374819] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1439331' to capture a snapshot of events at runtime. 00:05:32.040 [2024-12-13 10:59:52.374827] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1439331 for offline analysis/debug. 
00:05:32.040 [2024-12-13 10:59:52.374850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.608 10:59:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.608 10:59:53 -- common/autotest_common.sh@862 -- # return 0 00:05:32.608 10:59:53 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:32.608 10:59:53 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:32.608 10:59:53 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:32.608 10:59:53 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:32.608 10:59:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.608 10:59:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.608 10:59:53 -- common/autotest_common.sh@10 -- # set +x 00:05:32.608 ************************************ 00:05:32.608 START TEST rpc_integrity 00:05:32.608 ************************************ 00:05:32.608 10:59:53 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:32.608 10:59:53 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:32.608 10:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.608 10:59:53 -- common/autotest_common.sh@10 -- # set +x 00:05:32.608 10:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.608 10:59:53 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:32.608 10:59:53 -- rpc/rpc.sh@13 -- # jq length 00:05:32.608 10:59:53 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:32.608 10:59:53 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:32.608 10:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.608 10:59:53 -- common/autotest_common.sh@10 -- # set +x 00:05:32.608 10:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.608 10:59:53 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:32.608 10:59:53 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:32.608 10:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.608 10:59:53 -- common/autotest_common.sh@10 -- # set +x 00:05:32.608 10:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.608 10:59:53 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:32.608 { 00:05:32.608 "name": "Malloc0", 00:05:32.608 "aliases": [ 00:05:32.608 "cf521418-5ea4-4b20-9e80-5871984809c3" 00:05:32.608 ], 00:05:32.608 "product_name": "Malloc disk", 00:05:32.608 "block_size": 512, 00:05:32.608 "num_blocks": 16384, 00:05:32.608 "uuid": "cf521418-5ea4-4b20-9e80-5871984809c3", 00:05:32.608 "assigned_rate_limits": { 00:05:32.608 "rw_ios_per_sec": 0, 00:05:32.608 "rw_mbytes_per_sec": 0, 00:05:32.608 "r_mbytes_per_sec": 0, 00:05:32.608 "w_mbytes_per_sec": 0 00:05:32.608 }, 00:05:32.608 "claimed": false, 00:05:32.608 "zoned": false, 00:05:32.608 "supported_io_types": { 00:05:32.608 "read": true, 00:05:32.608 "write": true, 00:05:32.608 "unmap": true, 00:05:32.608 "write_zeroes": true, 00:05:32.608 "flush": true, 00:05:32.608 "reset": true, 00:05:32.608 "compare": false, 00:05:32.608 "compare_and_write": false, 00:05:32.608 "abort": true, 00:05:32.608 "nvme_admin": 
false, 00:05:32.608 "nvme_io": false 00:05:32.608 }, 00:05:32.608 "memory_domains": [ 00:05:32.608 { 00:05:32.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.608 "dma_device_type": 2 00:05:32.608 } 00:05:32.608 ], 00:05:32.608 "driver_specific": {} 00:05:32.608 } 00:05:32.608 ]' 00:05:32.608 10:59:53 -- rpc/rpc.sh@17 -- # jq length 00:05:32.608 10:59:53 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:32.608 10:59:53 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:32.608 10:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.608 10:59:53 -- common/autotest_common.sh@10 -- # set +x 00:05:32.608 [2024-12-13 10:59:53.171120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:32.608 [2024-12-13 10:59:53.171152] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:32.608 [2024-12-13 10:59:53.171164] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1367900 00:05:32.608 [2024-12-13 10:59:53.171170] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:32.608 [2024-12-13 10:59:53.172171] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:32.608 [2024-12-13 10:59:53.172193] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:32.608 Passthru0 00:05:32.608 10:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.609 10:59:53 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:32.867 10:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.867 10:59:53 -- common/autotest_common.sh@10 -- # set +x 00:05:32.867 10:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.867 10:59:53 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:32.867 { 00:05:32.867 "name": "Malloc0", 00:05:32.867 "aliases": [ 00:05:32.867 "cf521418-5ea4-4b20-9e80-5871984809c3" 00:05:32.867 ], 00:05:32.867 "product_name": "Malloc disk", 00:05:32.867 "block_size": 512, 00:05:32.867 "num_blocks": 16384, 00:05:32.867 "uuid": "cf521418-5ea4-4b20-9e80-5871984809c3", 00:05:32.867 "assigned_rate_limits": { 00:05:32.867 "rw_ios_per_sec": 0, 00:05:32.867 "rw_mbytes_per_sec": 0, 00:05:32.867 "r_mbytes_per_sec": 0, 00:05:32.867 "w_mbytes_per_sec": 0 00:05:32.867 }, 00:05:32.868 "claimed": true, 00:05:32.868 "claim_type": "exclusive_write", 00:05:32.868 "zoned": false, 00:05:32.868 "supported_io_types": { 00:05:32.868 "read": true, 00:05:32.868 "write": true, 00:05:32.868 "unmap": true, 00:05:32.868 "write_zeroes": true, 00:05:32.868 "flush": true, 00:05:32.868 "reset": true, 00:05:32.868 "compare": false, 00:05:32.868 "compare_and_write": false, 00:05:32.868 "abort": true, 00:05:32.868 "nvme_admin": false, 00:05:32.868 "nvme_io": false 00:05:32.868 }, 00:05:32.868 "memory_domains": [ 00:05:32.868 { 00:05:32.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.868 "dma_device_type": 2 00:05:32.868 } 00:05:32.868 ], 00:05:32.868 "driver_specific": {} 00:05:32.868 }, 00:05:32.868 { 00:05:32.868 "name": "Passthru0", 00:05:32.868 "aliases": [ 00:05:32.868 "c98b8898-922c-539c-bfc9-0c9adad4b034" 00:05:32.868 ], 00:05:32.868 "product_name": "passthru", 00:05:32.868 "block_size": 512, 00:05:32.868 "num_blocks": 16384, 00:05:32.868 "uuid": "c98b8898-922c-539c-bfc9-0c9adad4b034", 00:05:32.868 "assigned_rate_limits": { 00:05:32.868 "rw_ios_per_sec": 0, 00:05:32.868 "rw_mbytes_per_sec": 0, 00:05:32.868 "r_mbytes_per_sec": 0, 00:05:32.868 "w_mbytes_per_sec": 0 00:05:32.868 }, 00:05:32.868 "claimed": 
false, 00:05:32.868 "zoned": false, 00:05:32.868 "supported_io_types": { 00:05:32.868 "read": true, 00:05:32.868 "write": true, 00:05:32.868 "unmap": true, 00:05:32.868 "write_zeroes": true, 00:05:32.868 "flush": true, 00:05:32.868 "reset": true, 00:05:32.868 "compare": false, 00:05:32.868 "compare_and_write": false, 00:05:32.868 "abort": true, 00:05:32.868 "nvme_admin": false, 00:05:32.868 "nvme_io": false 00:05:32.868 }, 00:05:32.868 "memory_domains": [ 00:05:32.868 { 00:05:32.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.868 "dma_device_type": 2 00:05:32.868 } 00:05:32.868 ], 00:05:32.868 "driver_specific": { 00:05:32.868 "passthru": { 00:05:32.868 "name": "Passthru0", 00:05:32.868 "base_bdev_name": "Malloc0" 00:05:32.868 } 00:05:32.868 } 00:05:32.868 } 00:05:32.868 ]' 00:05:32.868 10:59:53 -- rpc/rpc.sh@21 -- # jq length 00:05:32.868 10:59:53 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:32.868 10:59:53 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:32.868 10:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.868 10:59:53 -- common/autotest_common.sh@10 -- # set +x 00:05:32.868 10:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.868 10:59:53 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:32.868 10:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.868 10:59:53 -- common/autotest_common.sh@10 -- # set +x 00:05:32.868 10:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.868 10:59:53 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:32.868 10:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.868 10:59:53 -- common/autotest_common.sh@10 -- # set +x 00:05:32.868 10:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.868 10:59:53 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:32.868 10:59:53 -- rpc/rpc.sh@26 -- # jq length 00:05:32.868 10:59:53 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:32.868 00:05:32.868 real 0m0.246s 00:05:32.868 user 0m0.154s 00:05:32.868 sys 0m0.034s 00:05:32.868 10:59:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:32.868 10:59:53 -- common/autotest_common.sh@10 -- # set +x 00:05:32.868 ************************************ 00:05:32.868 END TEST rpc_integrity 00:05:32.868 ************************************ 00:05:32.868 10:59:53 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:32.868 10:59:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.868 10:59:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.868 10:59:53 -- common/autotest_common.sh@10 -- # set +x 00:05:32.868 ************************************ 00:05:32.868 START TEST rpc_plugins 00:05:32.868 ************************************ 00:05:32.868 10:59:53 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:05:32.868 10:59:53 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:32.868 10:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.868 10:59:53 -- common/autotest_common.sh@10 -- # set +x 00:05:32.868 10:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.868 10:59:53 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:32.868 10:59:53 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:32.868 10:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.868 10:59:53 -- common/autotest_common.sh@10 -- # set +x 00:05:32.868 10:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.868 10:59:53 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:32.868 { 00:05:32.868 "name": 
"Malloc1", 00:05:32.868 "aliases": [ 00:05:32.868 "65953fe6-ce15-4d24-bdbe-87a109f31efe" 00:05:32.868 ], 00:05:32.868 "product_name": "Malloc disk", 00:05:32.868 "block_size": 4096, 00:05:32.868 "num_blocks": 256, 00:05:32.868 "uuid": "65953fe6-ce15-4d24-bdbe-87a109f31efe", 00:05:32.868 "assigned_rate_limits": { 00:05:32.868 "rw_ios_per_sec": 0, 00:05:32.868 "rw_mbytes_per_sec": 0, 00:05:32.868 "r_mbytes_per_sec": 0, 00:05:32.868 "w_mbytes_per_sec": 0 00:05:32.868 }, 00:05:32.868 "claimed": false, 00:05:32.868 "zoned": false, 00:05:32.868 "supported_io_types": { 00:05:32.868 "read": true, 00:05:32.868 "write": true, 00:05:32.868 "unmap": true, 00:05:32.868 "write_zeroes": true, 00:05:32.868 "flush": true, 00:05:32.868 "reset": true, 00:05:32.868 "compare": false, 00:05:32.868 "compare_and_write": false, 00:05:32.868 "abort": true, 00:05:32.868 "nvme_admin": false, 00:05:32.868 "nvme_io": false 00:05:32.868 }, 00:05:32.868 "memory_domains": [ 00:05:32.868 { 00:05:32.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.868 "dma_device_type": 2 00:05:32.868 } 00:05:32.868 ], 00:05:32.868 "driver_specific": {} 00:05:32.868 } 00:05:32.868 ]' 00:05:32.868 10:59:53 -- rpc/rpc.sh@32 -- # jq length 00:05:32.868 10:59:53 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:32.868 10:59:53 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:32.868 10:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.868 10:59:53 -- common/autotest_common.sh@10 -- # set +x 00:05:32.868 10:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.868 10:59:53 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:32.868 10:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.868 10:59:53 -- common/autotest_common.sh@10 -- # set +x 00:05:32.868 10:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.868 10:59:53 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:32.868 10:59:53 -- rpc/rpc.sh@36 -- # jq length 00:05:33.127 10:59:53 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:33.127 00:05:33.127 real 0m0.116s 00:05:33.127 user 0m0.077s 00:05:33.127 sys 0m0.010s 00:05:33.127 10:59:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:33.127 10:59:53 -- common/autotest_common.sh@10 -- # set +x 00:05:33.127 ************************************ 00:05:33.127 END TEST rpc_plugins 00:05:33.127 ************************************ 00:05:33.127 10:59:53 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:33.127 10:59:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.127 10:59:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.127 10:59:53 -- common/autotest_common.sh@10 -- # set +x 00:05:33.127 ************************************ 00:05:33.127 START TEST rpc_trace_cmd_test 00:05:33.127 ************************************ 00:05:33.127 10:59:53 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:05:33.127 10:59:53 -- rpc/rpc.sh@40 -- # local info 00:05:33.127 10:59:53 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:33.127 10:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.127 10:59:53 -- common/autotest_common.sh@10 -- # set +x 00:05:33.127 10:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.127 10:59:53 -- rpc/rpc.sh@42 -- # info='{ 00:05:33.127 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1439331", 00:05:33.127 "tpoint_group_mask": "0x8", 00:05:33.127 "iscsi_conn": { 00:05:33.127 "mask": "0x2", 00:05:33.127 "tpoint_mask": "0x0" 00:05:33.127 }, 00:05:33.127 
"scsi": { 00:05:33.127 "mask": "0x4", 00:05:33.127 "tpoint_mask": "0x0" 00:05:33.127 }, 00:05:33.127 "bdev": { 00:05:33.127 "mask": "0x8", 00:05:33.127 "tpoint_mask": "0xffffffffffffffff" 00:05:33.127 }, 00:05:33.127 "nvmf_rdma": { 00:05:33.127 "mask": "0x10", 00:05:33.127 "tpoint_mask": "0x0" 00:05:33.128 }, 00:05:33.128 "nvmf_tcp": { 00:05:33.128 "mask": "0x20", 00:05:33.128 "tpoint_mask": "0x0" 00:05:33.128 }, 00:05:33.128 "ftl": { 00:05:33.128 "mask": "0x40", 00:05:33.128 "tpoint_mask": "0x0" 00:05:33.128 }, 00:05:33.128 "blobfs": { 00:05:33.128 "mask": "0x80", 00:05:33.128 "tpoint_mask": "0x0" 00:05:33.128 }, 00:05:33.128 "dsa": { 00:05:33.128 "mask": "0x200", 00:05:33.128 "tpoint_mask": "0x0" 00:05:33.128 }, 00:05:33.128 "thread": { 00:05:33.128 "mask": "0x400", 00:05:33.128 "tpoint_mask": "0x0" 00:05:33.128 }, 00:05:33.128 "nvme_pcie": { 00:05:33.128 "mask": "0x800", 00:05:33.128 "tpoint_mask": "0x0" 00:05:33.128 }, 00:05:33.128 "iaa": { 00:05:33.128 "mask": "0x1000", 00:05:33.128 "tpoint_mask": "0x0" 00:05:33.128 }, 00:05:33.128 "nvme_tcp": { 00:05:33.128 "mask": "0x2000", 00:05:33.128 "tpoint_mask": "0x0" 00:05:33.128 }, 00:05:33.128 "bdev_nvme": { 00:05:33.128 "mask": "0x4000", 00:05:33.128 "tpoint_mask": "0x0" 00:05:33.128 } 00:05:33.128 }' 00:05:33.128 10:59:53 -- rpc/rpc.sh@43 -- # jq length 00:05:33.128 10:59:53 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:33.128 10:59:53 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:33.128 10:59:53 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:33.128 10:59:53 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:33.128 10:59:53 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:33.128 10:59:53 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:33.128 10:59:53 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:33.128 10:59:53 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:33.387 10:59:53 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:33.387 00:05:33.387 real 0m0.220s 00:05:33.387 user 0m0.182s 00:05:33.387 sys 0m0.031s 00:05:33.387 10:59:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:33.388 10:59:53 -- common/autotest_common.sh@10 -- # set +x 00:05:33.388 ************************************ 00:05:33.388 END TEST rpc_trace_cmd_test 00:05:33.388 ************************************ 00:05:33.388 10:59:53 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:33.388 10:59:53 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:33.388 10:59:53 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:33.388 10:59:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.388 10:59:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.388 10:59:53 -- common/autotest_common.sh@10 -- # set +x 00:05:33.388 ************************************ 00:05:33.388 START TEST rpc_daemon_integrity 00:05:33.388 ************************************ 00:05:33.388 10:59:53 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:33.388 10:59:53 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:33.388 10:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.388 10:59:53 -- common/autotest_common.sh@10 -- # set +x 00:05:33.388 10:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.388 10:59:53 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:33.388 10:59:53 -- rpc/rpc.sh@13 -- # jq length 00:05:33.388 10:59:53 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:33.388 10:59:53 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:33.388 10:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:05:33.388 10:59:53 -- common/autotest_common.sh@10 -- # set +x 00:05:33.388 10:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.388 10:59:53 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:33.388 10:59:53 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:33.388 10:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.388 10:59:53 -- common/autotest_common.sh@10 -- # set +x 00:05:33.388 10:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.388 10:59:53 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:33.388 { 00:05:33.388 "name": "Malloc2", 00:05:33.388 "aliases": [ 00:05:33.388 "1b60d6a6-a04e-4294-9820-6b35ca7bb194" 00:05:33.388 ], 00:05:33.388 "product_name": "Malloc disk", 00:05:33.388 "block_size": 512, 00:05:33.388 "num_blocks": 16384, 00:05:33.388 "uuid": "1b60d6a6-a04e-4294-9820-6b35ca7bb194", 00:05:33.388 "assigned_rate_limits": { 00:05:33.388 "rw_ios_per_sec": 0, 00:05:33.388 "rw_mbytes_per_sec": 0, 00:05:33.388 "r_mbytes_per_sec": 0, 00:05:33.388 "w_mbytes_per_sec": 0 00:05:33.388 }, 00:05:33.388 "claimed": false, 00:05:33.388 "zoned": false, 00:05:33.388 "supported_io_types": { 00:05:33.388 "read": true, 00:05:33.388 "write": true, 00:05:33.388 "unmap": true, 00:05:33.388 "write_zeroes": true, 00:05:33.388 "flush": true, 00:05:33.388 "reset": true, 00:05:33.388 "compare": false, 00:05:33.388 "compare_and_write": false, 00:05:33.388 "abort": true, 00:05:33.388 "nvme_admin": false, 00:05:33.388 "nvme_io": false 00:05:33.388 }, 00:05:33.388 "memory_domains": [ 00:05:33.388 { 00:05:33.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.388 "dma_device_type": 2 00:05:33.388 } 00:05:33.388 ], 00:05:33.388 "driver_specific": {} 00:05:33.388 } 00:05:33.388 ]' 00:05:33.388 10:59:53 -- rpc/rpc.sh@17 -- # jq length 00:05:33.388 10:59:53 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:33.388 10:59:53 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:33.388 10:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.388 10:59:53 -- common/autotest_common.sh@10 -- # set +x 00:05:33.388 [2024-12-13 10:59:53.868949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:33.388 [2024-12-13 10:59:53.868976] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:33.388 [2024-12-13 10:59:53.868987] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1507240 00:05:33.388 [2024-12-13 10:59:53.868993] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:33.388 [2024-12-13 10:59:53.869863] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:33.388 [2024-12-13 10:59:53.869883] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:33.388 Passthru0 00:05:33.388 10:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.388 10:59:53 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:33.388 10:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.388 10:59:53 -- common/autotest_common.sh@10 -- # set +x 00:05:33.388 10:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.388 10:59:53 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:33.388 { 00:05:33.388 "name": "Malloc2", 00:05:33.388 "aliases": [ 00:05:33.388 "1b60d6a6-a04e-4294-9820-6b35ca7bb194" 00:05:33.388 ], 00:05:33.388 "product_name": "Malloc disk", 00:05:33.388 "block_size": 512, 00:05:33.388 "num_blocks": 16384, 00:05:33.388 "uuid": "1b60d6a6-a04e-4294-9820-6b35ca7bb194", 
00:05:33.388 "assigned_rate_limits": { 00:05:33.388 "rw_ios_per_sec": 0, 00:05:33.388 "rw_mbytes_per_sec": 0, 00:05:33.388 "r_mbytes_per_sec": 0, 00:05:33.388 "w_mbytes_per_sec": 0 00:05:33.388 }, 00:05:33.388 "claimed": true, 00:05:33.388 "claim_type": "exclusive_write", 00:05:33.388 "zoned": false, 00:05:33.388 "supported_io_types": { 00:05:33.388 "read": true, 00:05:33.388 "write": true, 00:05:33.388 "unmap": true, 00:05:33.388 "write_zeroes": true, 00:05:33.388 "flush": true, 00:05:33.388 "reset": true, 00:05:33.388 "compare": false, 00:05:33.388 "compare_and_write": false, 00:05:33.388 "abort": true, 00:05:33.388 "nvme_admin": false, 00:05:33.388 "nvme_io": false 00:05:33.388 }, 00:05:33.388 "memory_domains": [ 00:05:33.388 { 00:05:33.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.388 "dma_device_type": 2 00:05:33.388 } 00:05:33.388 ], 00:05:33.388 "driver_specific": {} 00:05:33.388 }, 00:05:33.388 { 00:05:33.388 "name": "Passthru0", 00:05:33.388 "aliases": [ 00:05:33.388 "96e95861-4a22-5214-ac48-5566e3278a63" 00:05:33.388 ], 00:05:33.388 "product_name": "passthru", 00:05:33.388 "block_size": 512, 00:05:33.388 "num_blocks": 16384, 00:05:33.388 "uuid": "96e95861-4a22-5214-ac48-5566e3278a63", 00:05:33.388 "assigned_rate_limits": { 00:05:33.388 "rw_ios_per_sec": 0, 00:05:33.388 "rw_mbytes_per_sec": 0, 00:05:33.388 "r_mbytes_per_sec": 0, 00:05:33.388 "w_mbytes_per_sec": 0 00:05:33.388 }, 00:05:33.388 "claimed": false, 00:05:33.388 "zoned": false, 00:05:33.388 "supported_io_types": { 00:05:33.388 "read": true, 00:05:33.388 "write": true, 00:05:33.388 "unmap": true, 00:05:33.388 "write_zeroes": true, 00:05:33.388 "flush": true, 00:05:33.388 "reset": true, 00:05:33.388 "compare": false, 00:05:33.388 "compare_and_write": false, 00:05:33.388 "abort": true, 00:05:33.388 "nvme_admin": false, 00:05:33.388 "nvme_io": false 00:05:33.388 }, 00:05:33.388 "memory_domains": [ 00:05:33.388 { 00:05:33.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.388 "dma_device_type": 2 00:05:33.388 } 00:05:33.388 ], 00:05:33.388 "driver_specific": { 00:05:33.388 "passthru": { 00:05:33.388 "name": "Passthru0", 00:05:33.388 "base_bdev_name": "Malloc2" 00:05:33.388 } 00:05:33.388 } 00:05:33.388 } 00:05:33.388 ]' 00:05:33.388 10:59:53 -- rpc/rpc.sh@21 -- # jq length 00:05:33.388 10:59:53 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:33.388 10:59:53 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:33.388 10:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.388 10:59:53 -- common/autotest_common.sh@10 -- # set +x 00:05:33.388 10:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.388 10:59:53 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:33.388 10:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.388 10:59:53 -- common/autotest_common.sh@10 -- # set +x 00:05:33.388 10:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.388 10:59:53 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:33.388 10:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.388 10:59:53 -- common/autotest_common.sh@10 -- # set +x 00:05:33.648 10:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.648 10:59:53 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:33.648 10:59:53 -- rpc/rpc.sh@26 -- # jq length 00:05:33.648 10:59:54 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:33.648 00:05:33.648 real 0m0.267s 00:05:33.648 user 0m0.175s 00:05:33.648 sys 0m0.033s 00:05:33.648 10:59:54 -- common/autotest_common.sh@1115 -- # 
xtrace_disable 00:05:33.648 10:59:54 -- common/autotest_common.sh@10 -- # set +x 00:05:33.648 ************************************ 00:05:33.648 END TEST rpc_daemon_integrity 00:05:33.648 ************************************ 00:05:33.648 10:59:54 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:33.648 10:59:54 -- rpc/rpc.sh@84 -- # killprocess 1439331 00:05:33.648 10:59:54 -- common/autotest_common.sh@936 -- # '[' -z 1439331 ']' 00:05:33.648 10:59:54 -- common/autotest_common.sh@940 -- # kill -0 1439331 00:05:33.648 10:59:54 -- common/autotest_common.sh@941 -- # uname 00:05:33.648 10:59:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:33.648 10:59:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1439331 00:05:33.648 10:59:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:33.648 10:59:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:33.648 10:59:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1439331' 00:05:33.648 killing process with pid 1439331 00:05:33.648 10:59:54 -- common/autotest_common.sh@955 -- # kill 1439331 00:05:33.648 10:59:54 -- common/autotest_common.sh@960 -- # wait 1439331 00:05:33.907 00:05:33.907 real 0m2.398s 00:05:33.907 user 0m3.050s 00:05:33.907 sys 0m0.617s 00:05:33.907 10:59:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:33.907 10:59:54 -- common/autotest_common.sh@10 -- # set +x 00:05:33.907 ************************************ 00:05:33.907 END TEST rpc 00:05:33.907 ************************************ 00:05:33.907 10:59:54 -- spdk/autotest.sh@164 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:33.907 10:59:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.907 10:59:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.907 10:59:54 -- common/autotest_common.sh@10 -- # set +x 00:05:33.907 ************************************ 00:05:33.907 START TEST rpc_client 00:05:33.907 ************************************ 00:05:33.907 10:59:54 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:34.166 * Looking for test storage... 
00:05:34.167 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:05:34.167 10:59:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:34.167 10:59:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:34.167 10:59:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:34.167 10:59:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:34.167 10:59:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:34.167 10:59:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:34.167 10:59:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:34.167 10:59:54 -- scripts/common.sh@335 -- # IFS=.-: 00:05:34.167 10:59:54 -- scripts/common.sh@335 -- # read -ra ver1 00:05:34.167 10:59:54 -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.167 10:59:54 -- scripts/common.sh@336 -- # read -ra ver2 00:05:34.167 10:59:54 -- scripts/common.sh@337 -- # local 'op=<' 00:05:34.167 10:59:54 -- scripts/common.sh@339 -- # ver1_l=2 00:05:34.167 10:59:54 -- scripts/common.sh@340 -- # ver2_l=1 00:05:34.167 10:59:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:34.167 10:59:54 -- scripts/common.sh@343 -- # case "$op" in 00:05:34.167 10:59:54 -- scripts/common.sh@344 -- # : 1 00:05:34.167 10:59:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:34.167 10:59:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:34.167 10:59:54 -- scripts/common.sh@364 -- # decimal 1 00:05:34.167 10:59:54 -- scripts/common.sh@352 -- # local d=1 00:05:34.167 10:59:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.167 10:59:54 -- scripts/common.sh@354 -- # echo 1 00:05:34.167 10:59:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:34.167 10:59:54 -- scripts/common.sh@365 -- # decimal 2 00:05:34.167 10:59:54 -- scripts/common.sh@352 -- # local d=2 00:05:34.167 10:59:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.167 10:59:54 -- scripts/common.sh@354 -- # echo 2 00:05:34.167 10:59:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:34.167 10:59:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:34.167 10:59:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:34.167 10:59:54 -- scripts/common.sh@367 -- # return 0 00:05:34.167 10:59:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.167 10:59:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:34.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.167 --rc genhtml_branch_coverage=1 00:05:34.167 --rc genhtml_function_coverage=1 00:05:34.167 --rc genhtml_legend=1 00:05:34.167 --rc geninfo_all_blocks=1 00:05:34.167 --rc geninfo_unexecuted_blocks=1 00:05:34.167 00:05:34.167 ' 00:05:34.167 10:59:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:34.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.167 --rc genhtml_branch_coverage=1 00:05:34.167 --rc genhtml_function_coverage=1 00:05:34.167 --rc genhtml_legend=1 00:05:34.167 --rc geninfo_all_blocks=1 00:05:34.167 --rc geninfo_unexecuted_blocks=1 00:05:34.167 00:05:34.167 ' 00:05:34.167 10:59:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:34.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.167 --rc genhtml_branch_coverage=1 00:05:34.167 --rc genhtml_function_coverage=1 00:05:34.167 --rc genhtml_legend=1 00:05:34.167 --rc geninfo_all_blocks=1 00:05:34.167 --rc geninfo_unexecuted_blocks=1 00:05:34.167 00:05:34.167 ' 
00:05:34.167 10:59:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:34.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.167 --rc genhtml_branch_coverage=1 00:05:34.167 --rc genhtml_function_coverage=1 00:05:34.167 --rc genhtml_legend=1 00:05:34.167 --rc geninfo_all_blocks=1 00:05:34.167 --rc geninfo_unexecuted_blocks=1 00:05:34.167 00:05:34.167 ' 00:05:34.167 10:59:54 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:34.167 OK 00:05:34.167 10:59:54 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:34.167 00:05:34.167 real 0m0.189s 00:05:34.167 user 0m0.099s 00:05:34.167 sys 0m0.101s 00:05:34.167 10:59:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:34.167 10:59:54 -- common/autotest_common.sh@10 -- # set +x 00:05:34.167 ************************************ 00:05:34.167 END TEST rpc_client 00:05:34.167 ************************************ 00:05:34.167 10:59:54 -- spdk/autotest.sh@165 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:34.167 10:59:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:34.167 10:59:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.167 10:59:54 -- common/autotest_common.sh@10 -- # set +x 00:05:34.167 ************************************ 00:05:34.167 START TEST json_config 00:05:34.167 ************************************ 00:05:34.167 10:59:54 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:34.427 10:59:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:34.427 10:59:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:34.427 10:59:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:34.427 10:59:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:34.427 10:59:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:34.427 10:59:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:34.427 10:59:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:34.427 10:59:54 -- scripts/common.sh@335 -- # IFS=.-: 00:05:34.427 10:59:54 -- scripts/common.sh@335 -- # read -ra ver1 00:05:34.427 10:59:54 -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.427 10:59:54 -- scripts/common.sh@336 -- # read -ra ver2 00:05:34.427 10:59:54 -- scripts/common.sh@337 -- # local 'op=<' 00:05:34.427 10:59:54 -- scripts/common.sh@339 -- # ver1_l=2 00:05:34.427 10:59:54 -- scripts/common.sh@340 -- # ver2_l=1 00:05:34.427 10:59:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:34.427 10:59:54 -- scripts/common.sh@343 -- # case "$op" in 00:05:34.427 10:59:54 -- scripts/common.sh@344 -- # : 1 00:05:34.427 10:59:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:34.427 10:59:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:34.427 10:59:54 -- scripts/common.sh@364 -- # decimal 1 00:05:34.427 10:59:54 -- scripts/common.sh@352 -- # local d=1 00:05:34.427 10:59:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.427 10:59:54 -- scripts/common.sh@354 -- # echo 1 00:05:34.427 10:59:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:34.427 10:59:54 -- scripts/common.sh@365 -- # decimal 2 00:05:34.427 10:59:54 -- scripts/common.sh@352 -- # local d=2 00:05:34.427 10:59:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.427 10:59:54 -- scripts/common.sh@354 -- # echo 2 00:05:34.427 10:59:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:34.427 10:59:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:34.427 10:59:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:34.427 10:59:54 -- scripts/common.sh@367 -- # return 0 00:05:34.427 10:59:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.427 10:59:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:34.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.427 --rc genhtml_branch_coverage=1 00:05:34.427 --rc genhtml_function_coverage=1 00:05:34.427 --rc genhtml_legend=1 00:05:34.427 --rc geninfo_all_blocks=1 00:05:34.427 --rc geninfo_unexecuted_blocks=1 00:05:34.427 00:05:34.427 ' 00:05:34.427 10:59:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:34.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.427 --rc genhtml_branch_coverage=1 00:05:34.427 --rc genhtml_function_coverage=1 00:05:34.427 --rc genhtml_legend=1 00:05:34.427 --rc geninfo_all_blocks=1 00:05:34.427 --rc geninfo_unexecuted_blocks=1 00:05:34.427 00:05:34.427 ' 00:05:34.427 10:59:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:34.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.427 --rc genhtml_branch_coverage=1 00:05:34.427 --rc genhtml_function_coverage=1 00:05:34.427 --rc genhtml_legend=1 00:05:34.427 --rc geninfo_all_blocks=1 00:05:34.427 --rc geninfo_unexecuted_blocks=1 00:05:34.427 00:05:34.427 ' 00:05:34.427 10:59:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:34.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.427 --rc genhtml_branch_coverage=1 00:05:34.427 --rc genhtml_function_coverage=1 00:05:34.427 --rc genhtml_legend=1 00:05:34.427 --rc geninfo_all_blocks=1 00:05:34.427 --rc geninfo_unexecuted_blocks=1 00:05:34.427 00:05:34.427 ' 00:05:34.427 10:59:54 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:34.427 10:59:54 -- nvmf/common.sh@7 -- # uname -s 00:05:34.427 10:59:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:34.427 10:59:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:34.427 10:59:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:34.427 10:59:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:34.427 10:59:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:34.427 10:59:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:34.427 10:59:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:34.427 10:59:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:34.427 10:59:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:34.427 10:59:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:34.427 10:59:54 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:05:34.427 10:59:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:05:34.427 10:59:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:34.427 10:59:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:34.427 10:59:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:34.427 10:59:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:34.427 10:59:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:34.427 10:59:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:34.427 10:59:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:34.427 10:59:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.427 10:59:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.427 10:59:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.427 10:59:54 -- paths/export.sh@5 -- # export PATH 00:05:34.427 10:59:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.427 10:59:54 -- nvmf/common.sh@46 -- # : 0 00:05:34.427 10:59:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:34.427 10:59:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:34.427 10:59:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:34.427 10:59:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:34.427 10:59:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:34.427 10:59:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:34.427 10:59:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:34.427 10:59:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:34.427 10:59:54 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:34.427 10:59:54 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:34.427 10:59:54 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:34.427 10:59:54 -- 
json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:34.427 10:59:54 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:34.427 10:59:54 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:34.427 10:59:54 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:34.427 10:59:54 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:34.427 10:59:54 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:34.427 10:59:54 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:34.427 10:59:54 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:05:34.427 10:59:54 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:34.427 10:59:54 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:34.427 10:59:54 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:34.427 10:59:54 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:34.427 INFO: JSON configuration test init 00:05:34.427 10:59:54 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:34.427 10:59:54 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:34.427 10:59:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:34.427 10:59:54 -- common/autotest_common.sh@10 -- # set +x 00:05:34.427 10:59:54 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:34.427 10:59:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:34.427 10:59:54 -- common/autotest_common.sh@10 -- # set +x 00:05:34.427 10:59:54 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:34.427 10:59:54 -- json_config/json_config.sh@98 -- # local app=target 00:05:34.427 10:59:54 -- json_config/json_config.sh@99 -- # shift 00:05:34.427 10:59:54 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:34.427 10:59:54 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:34.427 10:59:54 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:34.427 10:59:54 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:34.427 10:59:54 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:34.427 10:59:54 -- json_config/json_config.sh@111 -- # app_pid[$app]=1440088 00:05:34.427 10:59:54 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:34.427 Waiting for target to run... 
00:05:34.427 10:59:54 -- json_config/json_config.sh@114 -- # waitforlisten 1440088 /var/tmp/spdk_tgt.sock 00:05:34.427 10:59:54 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:34.427 10:59:54 -- common/autotest_common.sh@829 -- # '[' -z 1440088 ']' 00:05:34.427 10:59:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:34.427 10:59:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.427 10:59:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:34.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:34.427 10:59:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.427 10:59:54 -- common/autotest_common.sh@10 -- # set +x 00:05:34.427 [2024-12-13 10:59:54.903711] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:34.427 [2024-12-13 10:59:54.903761] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1440088 ] 00:05:34.427 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.689 [2024-12-13 10:59:55.166561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.689 [2024-12-13 10:59:55.225610] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:34.689 [2024-12-13 10:59:55.225700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.259 10:59:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.259 10:59:55 -- common/autotest_common.sh@862 -- # return 0 00:05:35.259 10:59:55 -- json_config/json_config.sh@115 -- # echo '' 00:05:35.259 00:05:35.259 10:59:55 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:35.259 10:59:55 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:35.259 10:59:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:35.259 10:59:55 -- common/autotest_common.sh@10 -- # set +x 00:05:35.259 10:59:55 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:35.259 10:59:55 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:35.259 10:59:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:35.259 10:59:55 -- common/autotest_common.sh@10 -- # set +x 00:05:35.259 10:59:55 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:35.259 10:59:55 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:35.259 10:59:55 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:38.621 10:59:58 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:38.621 10:59:58 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:38.621 10:59:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:38.621 10:59:58 -- common/autotest_common.sh@10 -- # set +x 00:05:38.621 10:59:58 -- json_config/json_config.sh@48 -- # local ret=0 00:05:38.621 10:59:58 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:38.621 10:59:58 -- 
json_config/json_config.sh@49 -- # local enabled_types 00:05:38.621 10:59:58 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:38.621 10:59:58 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:38.621 10:59:58 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:38.621 10:59:58 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:38.621 10:59:58 -- json_config/json_config.sh@51 -- # local get_types 00:05:38.621 10:59:58 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:38.621 10:59:58 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:38.621 10:59:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:38.621 10:59:58 -- common/autotest_common.sh@10 -- # set +x 00:05:38.621 10:59:58 -- json_config/json_config.sh@58 -- # return 0 00:05:38.621 10:59:58 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:38.621 10:59:58 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:38.621 10:59:58 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:38.621 10:59:58 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:38.621 10:59:58 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:38.621 10:59:58 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:38.621 10:59:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:38.621 10:59:58 -- common/autotest_common.sh@10 -- # set +x 00:05:38.621 10:59:58 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:38.621 10:59:58 -- json_config/json_config.sh@286 -- # [[ rdma == \r\d\m\a ]] 00:05:38.621 10:59:58 -- json_config/json_config.sh@287 -- # TEST_TRANSPORT=rdma 00:05:38.621 10:59:58 -- json_config/json_config.sh@287 -- # nvmftestinit 00:05:38.621 10:59:58 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:05:38.621 10:59:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:38.621 10:59:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:05:38.621 10:59:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:05:38.621 10:59:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:05:38.621 10:59:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:38.621 10:59:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:05:38.621 10:59:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:38.621 10:59:58 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:05:38.621 10:59:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:05:38.621 10:59:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:05:38.621 10:59:58 -- common/autotest_common.sh@10 -- # set +x 00:05:43.895 11:00:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:05:43.895 11:00:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:05:43.895 11:00:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:05:43.895 11:00:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:05:43.895 11:00:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:05:43.895 11:00:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:05:43.895 11:00:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:05:43.895 11:00:04 -- nvmf/common.sh@294 -- # net_devs=() 00:05:43.895 11:00:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:05:43.895 11:00:04 -- nvmf/common.sh@295 -- # 
e810=() 00:05:43.895 11:00:04 -- nvmf/common.sh@295 -- # local -ga e810 00:05:43.895 11:00:04 -- nvmf/common.sh@296 -- # x722=() 00:05:43.895 11:00:04 -- nvmf/common.sh@296 -- # local -ga x722 00:05:43.895 11:00:04 -- nvmf/common.sh@297 -- # mlx=() 00:05:43.895 11:00:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:05:43.895 11:00:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:43.895 11:00:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:43.895 11:00:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:43.895 11:00:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:43.895 11:00:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:43.895 11:00:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:43.895 11:00:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:43.895 11:00:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:43.895 11:00:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:43.895 11:00:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:43.895 11:00:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:43.895 11:00:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:05:43.895 11:00:04 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:05:43.895 11:00:04 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:05:43.895 11:00:04 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:05:43.895 11:00:04 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:05:43.895 11:00:04 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:05:43.895 11:00:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:05:43.895 11:00:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:05:43.895 11:00:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:05:43.895 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:05:43.895 11:00:04 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:05:43.895 11:00:04 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:05:43.895 11:00:04 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:43.895 11:00:04 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:43.895 11:00:04 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:05:43.895 11:00:04 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:05:43.895 11:00:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:05:43.895 11:00:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:05:43.895 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:05:43.895 11:00:04 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:05:43.895 11:00:04 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:05:43.895 11:00:04 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:43.895 11:00:04 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:43.895 11:00:04 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:05:43.895 11:00:04 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:05:43.895 11:00:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:05:43.895 11:00:04 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:05:43.895 11:00:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:05:43.895 11:00:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:43.895 11:00:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
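The gather_supported_nvmf_pci_devs block above builds e810/x722/mlx arrays keyed by PCI vendor:device IDs and, because SPDK_TEST_NVMF_NICS=mlx5, keeps only the Mellanox list before walking the devices (0000:18:00.0 and 0000:18:00.1, both 0x15b3:0x1015); the per-device netdev lookup continues below. The snippet here is only an illustrative rewrite of that discovery step: it substitutes a plain lspci scan for the script's pci_bus_cache associative array, so treat the lspci usage as an assumption rather than what nvmf/common.sh actually runs.

# Illustration: enumerate Mellanox NICs by the device IDs the log appends to the "mlx" list.
mellanox=15b3
mlx_ids="1013 1015 1017 1019 101d 1021 a2d6 a2dc"
pci_devs=()
for id in $mlx_ids; do
    while read -r addr _; do
        pci_devs+=("$addr")
    done < <(lspci -D -d "${mellanox}:${id}" 2>/dev/null)
done
(( ${#pci_devs[@]} )) && printf 'Found %s\n' "${pci_devs[@]}"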
00:05:43.895 11:00:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:43.895 11:00:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:05:43.895 Found net devices under 0000:18:00.0: mlx_0_0 00:05:43.895 11:00:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:05:43.895 11:00:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:05:43.895 11:00:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:43.895 11:00:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:05:43.895 11:00:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:43.895 11:00:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:05:43.895 Found net devices under 0000:18:00.1: mlx_0_1 00:05:43.895 11:00:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:05:43.895 11:00:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:05:43.895 11:00:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:05:43.895 11:00:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:05:43.895 11:00:04 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:05:43.895 11:00:04 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:05:43.895 11:00:04 -- nvmf/common.sh@408 -- # rdma_device_init 00:05:43.895 11:00:04 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:05:43.896 11:00:04 -- nvmf/common.sh@57 -- # uname 00:05:43.896 11:00:04 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:05:43.896 11:00:04 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:05:43.896 11:00:04 -- nvmf/common.sh@62 -- # modprobe ib_core 00:05:44.155 11:00:04 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:05:44.155 11:00:04 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:05:44.155 11:00:04 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:05:44.155 11:00:04 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:05:44.155 11:00:04 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:05:44.155 11:00:04 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:05:44.155 11:00:04 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:44.155 11:00:04 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:05:44.155 11:00:04 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:44.155 11:00:04 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:05:44.155 11:00:04 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:05:44.155 11:00:04 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:44.155 11:00:04 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:05:44.155 11:00:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:05:44.155 11:00:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:44.155 11:00:04 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:44.155 11:00:04 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:05:44.155 11:00:04 -- nvmf/common.sh@104 -- # continue 2 00:05:44.155 11:00:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:05:44.155 11:00:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:44.155 11:00:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:44.155 11:00:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:44.155 11:00:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:44.155 11:00:04 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:05:44.155 11:00:04 -- nvmf/common.sh@104 -- # continue 2 00:05:44.155 11:00:04 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:05:44.155 11:00:04 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:05:44.155 11:00:04 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:05:44.155 11:00:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:05:44.155 11:00:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:05:44.155 11:00:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:05:44.155 11:00:04 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:05:44.155 11:00:04 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:05:44.155 11:00:04 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:05:44.155 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:44.155 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:05:44.155 altname enp24s0f0np0 00:05:44.155 altname ens785f0np0 00:05:44.155 inet 192.168.100.8/24 scope global mlx_0_0 00:05:44.155 valid_lft forever preferred_lft forever 00:05:44.155 11:00:04 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:05:44.155 11:00:04 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:05:44.155 11:00:04 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:05:44.155 11:00:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:05:44.155 11:00:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:05:44.155 11:00:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:05:44.155 11:00:04 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:05:44.155 11:00:04 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:05:44.155 11:00:04 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:05:44.155 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:44.155 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:05:44.155 altname enp24s0f1np1 00:05:44.155 altname ens785f1np1 00:05:44.155 inet 192.168.100.9/24 scope global mlx_0_1 00:05:44.155 valid_lft forever preferred_lft forever 00:05:44.155 11:00:04 -- nvmf/common.sh@410 -- # return 0 00:05:44.155 11:00:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:05:44.155 11:00:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:44.155 11:00:04 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:05:44.156 11:00:04 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:05:44.156 11:00:04 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:05:44.156 11:00:04 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:44.156 11:00:04 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:05:44.156 11:00:04 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:05:44.156 11:00:04 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:44.156 11:00:04 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:05:44.156 11:00:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:05:44.156 11:00:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:44.156 11:00:04 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:44.156 11:00:04 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:05:44.156 11:00:04 -- nvmf/common.sh@104 -- # continue 2 00:05:44.156 11:00:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:05:44.156 11:00:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:44.156 11:00:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:44.156 11:00:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:44.156 11:00:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:44.156 11:00:04 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:05:44.156 11:00:04 -- 
nvmf/common.sh@104 -- # continue 2 00:05:44.156 11:00:04 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:05:44.156 11:00:04 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:05:44.156 11:00:04 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:05:44.156 11:00:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:05:44.156 11:00:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:05:44.156 11:00:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:05:44.156 11:00:04 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:05:44.156 11:00:04 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:05:44.156 11:00:04 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:05:44.156 11:00:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:05:44.156 11:00:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:05:44.156 11:00:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:05:44.156 11:00:04 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:05:44.156 192.168.100.9' 00:05:44.156 11:00:04 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:05:44.156 192.168.100.9' 00:05:44.156 11:00:04 -- nvmf/common.sh@445 -- # head -n 1 00:05:44.156 11:00:04 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:44.156 11:00:04 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:05:44.156 192.168.100.9' 00:05:44.156 11:00:04 -- nvmf/common.sh@446 -- # tail -n +2 00:05:44.156 11:00:04 -- nvmf/common.sh@446 -- # head -n 1 00:05:44.156 11:00:04 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:44.156 11:00:04 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:05:44.156 11:00:04 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:44.156 11:00:04 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:05:44.156 11:00:04 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:05:44.156 11:00:04 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:05:44.156 11:00:04 -- json_config/json_config.sh@290 -- # [[ -z 192.168.100.8 ]] 00:05:44.156 11:00:04 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:44.156 11:00:04 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:44.414 MallocForNvmf0 00:05:44.414 11:00:04 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:44.415 11:00:04 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:44.415 MallocForNvmf1 00:05:44.415 11:00:04 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:05:44.415 11:00:04 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:05:44.672 [2024-12-13 11:00:05.111239] rdma.c:2780:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:05:44.672 [2024-12-13 11:00:05.136592] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x25468f0/0x25535c0) succeed. 00:05:44.672 [2024-12-13 11:00:05.146827] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2548ae0/0x2594c60) succeed. 
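Between loading the RDMA kernel modules and configuring the target, the harness resolves an IPv4 address for each RDMA netdev; the xtrace above does this with ip/awk/cut and ends up with NVMF_FIRST_TARGET_IP=192.168.100.8 and NVMF_SECOND_TARGET_IP=192.168.100.9. A compact restatement of that lookup, using the same commands and the interface names found above:

# Per-interface IPv4 lookup, exactly as shown in the xtrace above.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

for nic in mlx_0_0 mlx_0_1; do
    echo "$nic -> $(get_ip_address "$nic")"
done
# In this run: mlx_0_0 -> 192.168.100.8, mlx_0_1 -> 192.168.100.9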
00:05:44.672 11:00:05 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:44.672 11:00:05 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:44.930 11:00:05 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:44.930 11:00:05 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:45.194 11:00:05 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:45.194 11:00:05 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:45.194 11:00:05 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:45.194 11:00:05 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:45.452 [2024-12-13 11:00:05.800251] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:45.452 11:00:05 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:45.452 11:00:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:45.452 11:00:05 -- common/autotest_common.sh@10 -- # set +x 00:05:45.452 11:00:05 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:45.452 11:00:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:45.452 11:00:05 -- common/autotest_common.sh@10 -- # set +x 00:05:45.452 11:00:05 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:45.452 11:00:05 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:45.452 11:00:05 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:45.712 MallocBdevForConfigChangeCheck 00:05:45.712 11:00:06 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:45.712 11:00:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:45.712 11:00:06 -- common/autotest_common.sh@10 -- # set +x 00:05:45.712 11:00:06 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:45.712 11:00:06 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:45.970 11:00:06 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:45.970 INFO: shutting down applications... 
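Taken together, the create_nvmf_subsystem_config step above boils down to a short series of rpc.py calls against the target socket. Every command below appears verbatim in the log; only the $RPC shorthand is added here for readability.

RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

$RPC bdev_malloc_create 8 512  --name MallocForNvmf0    # 8 MB malloc bdev, 512 B blocks
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MB malloc bdev, 1024 B blocks
$RPC nvmf_create_transport -t rdma -u 8192 -c 0          # RDMA transport; in-capsule size raised to the 256 B minimum per the warning above
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420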
00:05:45.970 11:00:06 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:45.970 11:00:06 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:45.970 11:00:06 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:45.970 11:00:06 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:50.162 Calling clear_iscsi_subsystem 00:05:50.162 Calling clear_nvmf_subsystem 00:05:50.162 Calling clear_nbd_subsystem 00:05:50.162 Calling clear_ublk_subsystem 00:05:50.162 Calling clear_vhost_blk_subsystem 00:05:50.162 Calling clear_vhost_scsi_subsystem 00:05:50.162 Calling clear_scheduler_subsystem 00:05:50.162 Calling clear_bdev_subsystem 00:05:50.162 Calling clear_accel_subsystem 00:05:50.162 Calling clear_vmd_subsystem 00:05:50.162 Calling clear_sock_subsystem 00:05:50.162 Calling clear_iobuf_subsystem 00:05:50.162 11:00:10 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:05:50.162 11:00:10 -- json_config/json_config.sh@396 -- # count=100 00:05:50.162 11:00:10 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:50.162 11:00:10 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:50.162 11:00:10 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:50.163 11:00:10 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:50.163 11:00:10 -- json_config/json_config.sh@398 -- # break 00:05:50.163 11:00:10 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:50.163 11:00:10 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:50.163 11:00:10 -- json_config/json_config.sh@120 -- # local app=target 00:05:50.163 11:00:10 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:50.163 11:00:10 -- json_config/json_config.sh@124 -- # [[ -n 1440088 ]] 00:05:50.163 11:00:10 -- json_config/json_config.sh@127 -- # kill -SIGINT 1440088 00:05:50.163 11:00:10 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:50.163 11:00:10 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:50.163 11:00:10 -- json_config/json_config.sh@130 -- # kill -0 1440088 00:05:50.163 11:00:10 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:50.732 11:00:11 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:50.732 11:00:11 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:50.732 11:00:11 -- json_config/json_config.sh@130 -- # kill -0 1440088 00:05:50.732 11:00:11 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:50.732 11:00:11 -- json_config/json_config.sh@132 -- # break 00:05:50.732 11:00:11 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:50.732 11:00:11 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:50.732 SPDK target shutdown done 00:05:50.732 11:00:11 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:50.732 INFO: relaunching applications... 
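After clearing each subsystem with clear_config.py and confirming the remaining configuration is empty, json_config_test_shutdown_app stops the target by sending SIGINT and polling the PID for up to thirty half-second intervals, as the trace above shows. A reduced sketch of that loop, with the PID taken from this run:

app_pid=1440088                     # PID reported earlier in this run
kill -SIGINT "$app_pid"
for (( i = 0; i < 30; i++ )); do
    if ! kill -0 "$app_pid" 2>/dev/null; then
        echo 'SPDK target shutdown done'
        break
    fi
    sleep 0.5
done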
00:05:50.732 11:00:11 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:50.732 11:00:11 -- json_config/json_config.sh@98 -- # local app=target 00:05:50.732 11:00:11 -- json_config/json_config.sh@99 -- # shift 00:05:50.732 11:00:11 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:50.732 11:00:11 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:50.732 11:00:11 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:50.732 11:00:11 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:50.732 11:00:11 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:50.732 11:00:11 -- json_config/json_config.sh@111 -- # app_pid[$app]=1445914 00:05:50.732 11:00:11 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:50.732 Waiting for target to run... 00:05:50.732 11:00:11 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:50.732 11:00:11 -- json_config/json_config.sh@114 -- # waitforlisten 1445914 /var/tmp/spdk_tgt.sock 00:05:50.732 11:00:11 -- common/autotest_common.sh@829 -- # '[' -z 1445914 ']' 00:05:50.732 11:00:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:50.732 11:00:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.732 11:00:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:50.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:50.732 11:00:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.732 11:00:11 -- common/autotest_common.sh@10 -- # set +x 00:05:50.732 [2024-12-13 11:00:11.109570] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:50.732 [2024-12-13 11:00:11.109625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1445914 ] 00:05:50.732 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.990 [2024-12-13 11:00:11.381276] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.990 [2024-12-13 11:00:11.440246] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:50.990 [2024-12-13 11:00:11.440341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.287 [2024-12-13 11:00:14.452739] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x917b60/0x9236c0) succeed. 00:05:54.287 [2024-12-13 11:00:14.461976] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x919d50/0x964d60) succeed. 
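The relaunch step above starts the same binary with the same core mask and RPC socket, but this time feeds it the configuration saved from the first run instead of --wait-for-rpc. The command below is the one visible in the log, only reflowed onto multiple lines:

/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt \
    -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json &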
00:05:54.287 [2024-12-13 11:00:14.508778] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:54.855 11:00:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.855 11:00:15 -- common/autotest_common.sh@862 -- # return 0 00:05:54.855 11:00:15 -- json_config/json_config.sh@115 -- # echo '' 00:05:54.855 00:05:54.855 11:00:15 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:54.855 11:00:15 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:54.855 INFO: Checking if target configuration is the same... 00:05:54.855 11:00:15 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:54.855 11:00:15 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:54.855 11:00:15 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:54.855 + '[' 2 -ne 2 ']' 00:05:54.855 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:54.855 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:05:54.855 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:54.855 +++ basename /dev/fd/62 00:05:54.855 ++ mktemp /tmp/62.XXX 00:05:54.855 + tmp_file_1=/tmp/62.9qs 00:05:54.855 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:54.855 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:54.855 + tmp_file_2=/tmp/spdk_tgt_config.json.gH5 00:05:54.855 + ret=0 00:05:54.855 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:55.114 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:55.114 + diff -u /tmp/62.9qs /tmp/spdk_tgt_config.json.gH5 00:05:55.114 + echo 'INFO: JSON config files are the same' 00:05:55.114 INFO: JSON config files are the same 00:05:55.114 + rm /tmp/62.9qs /tmp/spdk_tgt_config.json.gH5 00:05:55.114 + exit 0 00:05:55.114 11:00:15 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:55.114 11:00:15 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:55.114 INFO: changing configuration and checking if this can be detected... 00:05:55.114 11:00:15 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:55.114 11:00:15 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:55.114 11:00:15 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:55.114 11:00:15 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:55.114 11:00:15 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:55.114 + '[' 2 -ne 2 ']' 00:05:55.114 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:55.114 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 
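Both configuration checks in this part of the run (the "configuration is the same" check just completed and the change-detection check that continues below) go through json_diff.sh, which canonicalises two JSON documents and diffs them: the live configuration dumped via save_config and the spdk_tgt_config.json on disk. The sketch below restates that flow under simplifying assumptions; temporary-file handling is reduced and the real script receives its inputs as file-descriptor arguments rather than reading them itself.

SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
FILTER="$SPDK_DIR/test/json_config/config_filter.py"

live_cfg=$(mktemp)    # configuration currently loaded in the target
file_cfg=$(mktemp)    # reference configuration saved on disk

"$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config | "$FILTER" -method sort > "$live_cfg"
"$FILTER" -method sort < "$SPDK_DIR/spdk_tgt_config.json" > "$file_cfg"

if diff -u "$live_cfg" "$file_cfg"; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi
rm -f "$live_cfg" "$file_cfg"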
00:05:55.373 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:55.373 +++ basename /dev/fd/62 00:05:55.373 ++ mktemp /tmp/62.XXX 00:05:55.373 + tmp_file_1=/tmp/62.00g 00:05:55.373 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:55.373 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:55.373 + tmp_file_2=/tmp/spdk_tgt_config.json.EYF 00:05:55.373 + ret=0 00:05:55.373 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:55.632 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:55.632 + diff -u /tmp/62.00g /tmp/spdk_tgt_config.json.EYF 00:05:55.632 + ret=1 00:05:55.632 + echo '=== Start of file: /tmp/62.00g ===' 00:05:55.632 + cat /tmp/62.00g 00:05:55.632 + echo '=== End of file: /tmp/62.00g ===' 00:05:55.632 + echo '' 00:05:55.632 + echo '=== Start of file: /tmp/spdk_tgt_config.json.EYF ===' 00:05:55.632 + cat /tmp/spdk_tgt_config.json.EYF 00:05:55.632 + echo '=== End of file: /tmp/spdk_tgt_config.json.EYF ===' 00:05:55.632 + echo '' 00:05:55.632 + rm /tmp/62.00g /tmp/spdk_tgt_config.json.EYF 00:05:55.632 + exit 1 00:05:55.632 11:00:16 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:55.632 INFO: configuration change detected. 00:05:55.632 11:00:16 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:55.632 11:00:16 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:55.632 11:00:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:55.632 11:00:16 -- common/autotest_common.sh@10 -- # set +x 00:05:55.632 11:00:16 -- json_config/json_config.sh@360 -- # local ret=0 00:05:55.632 11:00:16 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:55.632 11:00:16 -- json_config/json_config.sh@370 -- # [[ -n 1445914 ]] 00:05:55.632 11:00:16 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:55.632 11:00:16 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:55.632 11:00:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:55.632 11:00:16 -- common/autotest_common.sh@10 -- # set +x 00:05:55.632 11:00:16 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:55.632 11:00:16 -- json_config/json_config.sh@246 -- # uname -s 00:05:55.632 11:00:16 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:55.632 11:00:16 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:55.632 11:00:16 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:55.632 11:00:16 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:55.632 11:00:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:55.632 11:00:16 -- common/autotest_common.sh@10 -- # set +x 00:05:55.632 11:00:16 -- json_config/json_config.sh@376 -- # killprocess 1445914 00:05:55.632 11:00:16 -- common/autotest_common.sh@936 -- # '[' -z 1445914 ']' 00:05:55.632 11:00:16 -- common/autotest_common.sh@940 -- # kill -0 1445914 00:05:55.632 11:00:16 -- common/autotest_common.sh@941 -- # uname 00:05:55.632 11:00:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:55.632 11:00:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1445914 00:05:55.632 11:00:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:55.632 11:00:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:55.632 11:00:16 -- common/autotest_common.sh@954 -- # echo 'killing 
process with pid 1445914' 00:05:55.632 killing process with pid 1445914 00:05:55.632 11:00:16 -- common/autotest_common.sh@955 -- # kill 1445914 00:05:55.632 11:00:16 -- common/autotest_common.sh@960 -- # wait 1445914 00:05:59.824 11:00:20 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:59.824 11:00:20 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:59.824 11:00:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:59.824 11:00:20 -- common/autotest_common.sh@10 -- # set +x 00:05:59.824 11:00:20 -- json_config/json_config.sh@381 -- # return 0 00:05:59.824 11:00:20 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:59.824 INFO: Success 00:05:59.824 11:00:20 -- json_config/json_config.sh@1 -- # nvmftestfini 00:05:59.824 11:00:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:05:59.824 11:00:20 -- nvmf/common.sh@116 -- # sync 00:05:59.824 11:00:20 -- nvmf/common.sh@118 -- # '[' '' == tcp ']' 00:05:59.824 11:00:20 -- nvmf/common.sh@118 -- # '[' '' == rdma ']' 00:05:59.824 11:00:20 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:05:59.824 11:00:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:05:59.824 11:00:20 -- nvmf/common.sh@483 -- # [[ '' == \t\c\p ]] 00:05:59.824 00:05:59.825 real 0m25.388s 00:05:59.825 user 0m28.150s 00:05:59.825 sys 0m5.943s 00:05:59.825 11:00:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:59.825 11:00:20 -- common/autotest_common.sh@10 -- # set +x 00:05:59.825 ************************************ 00:05:59.825 END TEST json_config 00:05:59.825 ************************************ 00:05:59.825 11:00:20 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:59.825 11:00:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.825 11:00:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.825 11:00:20 -- common/autotest_common.sh@10 -- # set +x 00:05:59.825 ************************************ 00:05:59.825 START TEST json_config_extra_key 00:05:59.825 ************************************ 00:05:59.825 11:00:20 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:59.825 11:00:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:59.825 11:00:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:59.825 11:00:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:59.825 11:00:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:59.825 11:00:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:59.825 11:00:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:59.825 11:00:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:59.825 11:00:20 -- scripts/common.sh@335 -- # IFS=.-: 00:05:59.825 11:00:20 -- scripts/common.sh@335 -- # read -ra ver1 00:05:59.825 11:00:20 -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.825 11:00:20 -- scripts/common.sh@336 -- # read -ra ver2 00:05:59.825 11:00:20 -- scripts/common.sh@337 -- # local 'op=<' 00:05:59.825 11:00:20 -- scripts/common.sh@339 -- # ver1_l=2 00:05:59.825 11:00:20 -- scripts/common.sh@340 -- # ver2_l=1 00:05:59.825 11:00:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:59.825 11:00:20 -- scripts/common.sh@343 -- # case "$op" in 00:05:59.825 11:00:20 -- 
scripts/common.sh@344 -- # : 1 00:05:59.825 11:00:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:59.825 11:00:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:59.825 11:00:20 -- scripts/common.sh@364 -- # decimal 1 00:05:59.825 11:00:20 -- scripts/common.sh@352 -- # local d=1 00:05:59.825 11:00:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.825 11:00:20 -- scripts/common.sh@354 -- # echo 1 00:05:59.825 11:00:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:59.825 11:00:20 -- scripts/common.sh@365 -- # decimal 2 00:05:59.825 11:00:20 -- scripts/common.sh@352 -- # local d=2 00:05:59.825 11:00:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.825 11:00:20 -- scripts/common.sh@354 -- # echo 2 00:05:59.825 11:00:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:59.825 11:00:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:59.825 11:00:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:59.825 11:00:20 -- scripts/common.sh@367 -- # return 0 00:05:59.825 11:00:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.825 11:00:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:59.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.825 --rc genhtml_branch_coverage=1 00:05:59.825 --rc genhtml_function_coverage=1 00:05:59.825 --rc genhtml_legend=1 00:05:59.825 --rc geninfo_all_blocks=1 00:05:59.825 --rc geninfo_unexecuted_blocks=1 00:05:59.825 00:05:59.825 ' 00:05:59.825 11:00:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:59.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.825 --rc genhtml_branch_coverage=1 00:05:59.825 --rc genhtml_function_coverage=1 00:05:59.825 --rc genhtml_legend=1 00:05:59.825 --rc geninfo_all_blocks=1 00:05:59.825 --rc geninfo_unexecuted_blocks=1 00:05:59.825 00:05:59.825 ' 00:05:59.825 11:00:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:59.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.825 --rc genhtml_branch_coverage=1 00:05:59.825 --rc genhtml_function_coverage=1 00:05:59.825 --rc genhtml_legend=1 00:05:59.825 --rc geninfo_all_blocks=1 00:05:59.825 --rc geninfo_unexecuted_blocks=1 00:05:59.825 00:05:59.825 ' 00:05:59.825 11:00:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:59.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.825 --rc genhtml_branch_coverage=1 00:05:59.825 --rc genhtml_function_coverage=1 00:05:59.825 --rc genhtml_legend=1 00:05:59.825 --rc geninfo_all_blocks=1 00:05:59.825 --rc geninfo_unexecuted_blocks=1 00:05:59.825 00:05:59.825 ' 00:05:59.825 11:00:20 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:59.825 11:00:20 -- nvmf/common.sh@7 -- # uname -s 00:05:59.825 11:00:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:59.825 11:00:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:59.825 11:00:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:59.825 11:00:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:59.825 11:00:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:59.825 11:00:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:59.825 11:00:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:59.825 11:00:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:59.825 11:00:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:05:59.825 11:00:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:59.825 11:00:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:05:59.825 11:00:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:05:59.825 11:00:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:59.825 11:00:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:59.825 11:00:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:59.825 11:00:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:59.825 11:00:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:59.825 11:00:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:59.825 11:00:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:59.825 11:00:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.825 11:00:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.825 11:00:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.825 11:00:20 -- paths/export.sh@5 -- # export PATH 00:05:59.825 11:00:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.825 11:00:20 -- nvmf/common.sh@46 -- # : 0 00:05:59.825 11:00:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:59.825 11:00:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:59.825 11:00:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:59.825 11:00:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:59.825 11:00:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:59.825 11:00:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:59.825 11:00:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:59.825 11:00:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:59.825 11:00:20 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:59.825 11:00:20 -- json_config/json_config_extra_key.sh@16 
-- # declare -A app_pid 00:05:59.825 11:00:20 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:59.825 11:00:20 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:59.825 11:00:20 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:59.825 11:00:20 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:59.825 11:00:20 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:59.825 11:00:20 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:59.825 11:00:20 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:59.825 11:00:20 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:59.825 INFO: launching applications... 00:05:59.825 11:00:20 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:59.825 11:00:20 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:59.825 11:00:20 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:59.825 11:00:20 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:59.825 11:00:20 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:59.825 11:00:20 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=1447648 00:05:59.825 11:00:20 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:59.825 Waiting for target to run... 00:05:59.825 11:00:20 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 1447648 /var/tmp/spdk_tgt.sock 00:05:59.825 11:00:20 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:59.825 11:00:20 -- common/autotest_common.sh@829 -- # '[' -z 1447648 ']' 00:05:59.825 11:00:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:59.825 11:00:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.825 11:00:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:59.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:59.825 11:00:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.825 11:00:20 -- common/autotest_common.sh@10 -- # set +x 00:05:59.826 [2024-12-13 11:00:20.327241] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:59.826 [2024-12-13 11:00:20.327295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1447648 ] 00:05:59.826 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.394 [2024-12-13 11:00:20.733827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.394 [2024-12-13 11:00:20.813342] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:00.394 [2024-12-13 11:00:20.813445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.653 11:00:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.653 11:00:21 -- common/autotest_common.sh@862 -- # return 0 00:06:00.653 11:00:21 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:06:00.653 00:06:00.653 11:00:21 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:06:00.653 INFO: shutting down applications... 00:06:00.653 11:00:21 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:06:00.653 11:00:21 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:06:00.653 11:00:21 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:06:00.653 11:00:21 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 1447648 ]] 00:06:00.653 11:00:21 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 1447648 00:06:00.653 11:00:21 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:06:00.653 11:00:21 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:00.653 11:00:21 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1447648 00:06:00.653 11:00:21 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:06:01.221 11:00:21 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:06:01.221 11:00:21 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:01.221 11:00:21 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1447648 00:06:01.221 11:00:21 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:06:01.221 11:00:21 -- json_config/json_config_extra_key.sh@52 -- # break 00:06:01.221 11:00:21 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:06:01.221 11:00:21 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:06:01.221 SPDK target shutdown done 00:06:01.221 11:00:21 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:06:01.221 Success 00:06:01.221 00:06:01.221 real 0m1.494s 00:06:01.221 user 0m1.130s 00:06:01.221 sys 0m0.504s 00:06:01.221 11:00:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:01.221 11:00:21 -- common/autotest_common.sh@10 -- # set +x 00:06:01.221 ************************************ 00:06:01.221 END TEST json_config_extra_key 00:06:01.221 ************************************ 00:06:01.221 11:00:21 -- spdk/autotest.sh@167 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:01.221 11:00:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:01.221 11:00:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.221 11:00:21 -- common/autotest_common.sh@10 -- # set +x 00:06:01.221 ************************************ 00:06:01.221 START TEST alias_rpc 00:06:01.221 ************************************ 00:06:01.221 11:00:21 -- 
common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:01.221 * Looking for test storage... 00:06:01.221 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:06:01.221 11:00:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:01.221 11:00:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:01.221 11:00:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:01.221 11:00:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:01.481 11:00:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:01.481 11:00:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:01.481 11:00:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:01.481 11:00:21 -- scripts/common.sh@335 -- # IFS=.-: 00:06:01.481 11:00:21 -- scripts/common.sh@335 -- # read -ra ver1 00:06:01.481 11:00:21 -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.481 11:00:21 -- scripts/common.sh@336 -- # read -ra ver2 00:06:01.481 11:00:21 -- scripts/common.sh@337 -- # local 'op=<' 00:06:01.481 11:00:21 -- scripts/common.sh@339 -- # ver1_l=2 00:06:01.481 11:00:21 -- scripts/common.sh@340 -- # ver2_l=1 00:06:01.481 11:00:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:01.481 11:00:21 -- scripts/common.sh@343 -- # case "$op" in 00:06:01.481 11:00:21 -- scripts/common.sh@344 -- # : 1 00:06:01.481 11:00:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:01.481 11:00:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:01.481 11:00:21 -- scripts/common.sh@364 -- # decimal 1 00:06:01.481 11:00:21 -- scripts/common.sh@352 -- # local d=1 00:06:01.481 11:00:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.481 11:00:21 -- scripts/common.sh@354 -- # echo 1 00:06:01.481 11:00:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:01.481 11:00:21 -- scripts/common.sh@365 -- # decimal 2 00:06:01.481 11:00:21 -- scripts/common.sh@352 -- # local d=2 00:06:01.481 11:00:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.481 11:00:21 -- scripts/common.sh@354 -- # echo 2 00:06:01.481 11:00:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:01.481 11:00:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:01.481 11:00:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:01.481 11:00:21 -- scripts/common.sh@367 -- # return 0 00:06:01.481 11:00:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.481 11:00:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:01.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.481 --rc genhtml_branch_coverage=1 00:06:01.481 --rc genhtml_function_coverage=1 00:06:01.481 --rc genhtml_legend=1 00:06:01.481 --rc geninfo_all_blocks=1 00:06:01.481 --rc geninfo_unexecuted_blocks=1 00:06:01.481 00:06:01.481 ' 00:06:01.481 11:00:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:01.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.481 --rc genhtml_branch_coverage=1 00:06:01.481 --rc genhtml_function_coverage=1 00:06:01.481 --rc genhtml_legend=1 00:06:01.481 --rc geninfo_all_blocks=1 00:06:01.481 --rc geninfo_unexecuted_blocks=1 00:06:01.481 00:06:01.481 ' 00:06:01.481 11:00:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:01.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.481 --rc genhtml_branch_coverage=1 00:06:01.481 --rc 
genhtml_function_coverage=1 00:06:01.481 --rc genhtml_legend=1 00:06:01.481 --rc geninfo_all_blocks=1 00:06:01.481 --rc geninfo_unexecuted_blocks=1 00:06:01.481 00:06:01.481 ' 00:06:01.481 11:00:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:01.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.481 --rc genhtml_branch_coverage=1 00:06:01.481 --rc genhtml_function_coverage=1 00:06:01.481 --rc genhtml_legend=1 00:06:01.481 --rc geninfo_all_blocks=1 00:06:01.481 --rc geninfo_unexecuted_blocks=1 00:06:01.481 00:06:01.481 ' 00:06:01.481 11:00:21 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:01.481 11:00:21 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1448004 00:06:01.481 11:00:21 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1448004 00:06:01.481 11:00:21 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:01.481 11:00:21 -- common/autotest_common.sh@829 -- # '[' -z 1448004 ']' 00:06:01.481 11:00:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.481 11:00:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.481 11:00:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.481 11:00:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.481 11:00:21 -- common/autotest_common.sh@10 -- # set +x 00:06:01.481 [2024-12-13 11:00:21.852995] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:01.481 [2024-12-13 11:00:21.853047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1448004 ] 00:06:01.481 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.481 [2024-12-13 11:00:21.902697] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.481 [2024-12-13 11:00:21.974034] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:01.481 [2024-12-13 11:00:21.974140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.418 11:00:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.418 11:00:22 -- common/autotest_common.sh@862 -- # return 0 00:06:02.418 11:00:22 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:02.418 11:00:22 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1448004 00:06:02.418 11:00:22 -- common/autotest_common.sh@936 -- # '[' -z 1448004 ']' 00:06:02.418 11:00:22 -- common/autotest_common.sh@940 -- # kill -0 1448004 00:06:02.418 11:00:22 -- common/autotest_common.sh@941 -- # uname 00:06:02.418 11:00:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:02.418 11:00:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1448004 00:06:02.418 11:00:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:02.418 11:00:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:02.418 11:00:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1448004' 00:06:02.418 killing process with pid 1448004 00:06:02.418 11:00:22 -- common/autotest_common.sh@955 -- # kill 1448004 00:06:02.418 11:00:22 -- 
common/autotest_common.sh@960 -- # wait 1448004 00:06:02.678 00:06:02.678 real 0m1.564s 00:06:02.678 user 0m1.679s 00:06:02.678 sys 0m0.408s 00:06:02.678 11:00:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:02.678 11:00:23 -- common/autotest_common.sh@10 -- # set +x 00:06:02.678 ************************************ 00:06:02.678 END TEST alias_rpc 00:06:02.678 ************************************ 00:06:02.678 11:00:23 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:06:02.678 11:00:23 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:02.678 11:00:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:02.678 11:00:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.678 11:00:23 -- common/autotest_common.sh@10 -- # set +x 00:06:02.678 ************************************ 00:06:02.678 START TEST spdkcli_tcp 00:06:02.678 ************************************ 00:06:02.678 11:00:23 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:02.937 * Looking for test storage... 00:06:02.937 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:06:02.937 11:00:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:02.937 11:00:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:02.937 11:00:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:02.937 11:00:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:02.937 11:00:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:02.937 11:00:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:02.937 11:00:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:02.937 11:00:23 -- scripts/common.sh@335 -- # IFS=.-: 00:06:02.937 11:00:23 -- scripts/common.sh@335 -- # read -ra ver1 00:06:02.937 11:00:23 -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.937 11:00:23 -- scripts/common.sh@336 -- # read -ra ver2 00:06:02.937 11:00:23 -- scripts/common.sh@337 -- # local 'op=<' 00:06:02.937 11:00:23 -- scripts/common.sh@339 -- # ver1_l=2 00:06:02.937 11:00:23 -- scripts/common.sh@340 -- # ver2_l=1 00:06:02.937 11:00:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:02.937 11:00:23 -- scripts/common.sh@343 -- # case "$op" in 00:06:02.937 11:00:23 -- scripts/common.sh@344 -- # : 1 00:06:02.937 11:00:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:02.937 11:00:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.937 11:00:23 -- scripts/common.sh@364 -- # decimal 1 00:06:02.937 11:00:23 -- scripts/common.sh@352 -- # local d=1 00:06:02.937 11:00:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.937 11:00:23 -- scripts/common.sh@354 -- # echo 1 00:06:02.937 11:00:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:02.937 11:00:23 -- scripts/common.sh@365 -- # decimal 2 00:06:02.937 11:00:23 -- scripts/common.sh@352 -- # local d=2 00:06:02.937 11:00:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.937 11:00:23 -- scripts/common.sh@354 -- # echo 2 00:06:02.937 11:00:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:02.937 11:00:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:02.937 11:00:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:02.937 11:00:23 -- scripts/common.sh@367 -- # return 0 00:06:02.937 11:00:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.937 11:00:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:02.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.937 --rc genhtml_branch_coverage=1 00:06:02.937 --rc genhtml_function_coverage=1 00:06:02.937 --rc genhtml_legend=1 00:06:02.937 --rc geninfo_all_blocks=1 00:06:02.937 --rc geninfo_unexecuted_blocks=1 00:06:02.937 00:06:02.937 ' 00:06:02.937 11:00:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:02.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.937 --rc genhtml_branch_coverage=1 00:06:02.938 --rc genhtml_function_coverage=1 00:06:02.938 --rc genhtml_legend=1 00:06:02.938 --rc geninfo_all_blocks=1 00:06:02.938 --rc geninfo_unexecuted_blocks=1 00:06:02.938 00:06:02.938 ' 00:06:02.938 11:00:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:02.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.938 --rc genhtml_branch_coverage=1 00:06:02.938 --rc genhtml_function_coverage=1 00:06:02.938 --rc genhtml_legend=1 00:06:02.938 --rc geninfo_all_blocks=1 00:06:02.938 --rc geninfo_unexecuted_blocks=1 00:06:02.938 00:06:02.938 ' 00:06:02.938 11:00:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:02.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.938 --rc genhtml_branch_coverage=1 00:06:02.938 --rc genhtml_function_coverage=1 00:06:02.938 --rc genhtml_legend=1 00:06:02.938 --rc geninfo_all_blocks=1 00:06:02.938 --rc geninfo_unexecuted_blocks=1 00:06:02.938 00:06:02.938 ' 00:06:02.938 11:00:23 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:06:02.938 11:00:23 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:02.938 11:00:23 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:06:02.938 11:00:23 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:02.938 11:00:23 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:02.938 11:00:23 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:02.938 11:00:23 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:02.938 11:00:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:02.938 11:00:23 -- common/autotest_common.sh@10 -- # set +x 00:06:02.938 11:00:23 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1448426 00:06:02.938 11:00:23 -- spdkcli/tcp.sh@27 -- # waitforlisten 1448426 00:06:02.938 11:00:23 -- 
spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:02.938 11:00:23 -- common/autotest_common.sh@829 -- # '[' -z 1448426 ']' 00:06:02.938 11:00:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.938 11:00:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.938 11:00:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.938 11:00:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.938 11:00:23 -- common/autotest_common.sh@10 -- # set +x 00:06:02.938 [2024-12-13 11:00:23.462113] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:02.938 [2024-12-13 11:00:23.462162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1448426 ] 00:06:02.938 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.197 [2024-12-13 11:00:23.512768] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.197 [2024-12-13 11:00:23.585531] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:03.197 [2024-12-13 11:00:23.585656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.197 [2024-12-13 11:00:23.585660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.764 11:00:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.764 11:00:24 -- common/autotest_common.sh@862 -- # return 0 00:06:03.764 11:00:24 -- spdkcli/tcp.sh@31 -- # socat_pid=1448570 00:06:03.764 11:00:24 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:03.764 11:00:24 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:04.024 [ 00:06:04.024 "bdev_malloc_delete", 00:06:04.024 "bdev_malloc_create", 00:06:04.024 "bdev_null_resize", 00:06:04.024 "bdev_null_delete", 00:06:04.024 "bdev_null_create", 00:06:04.024 "bdev_nvme_cuse_unregister", 00:06:04.024 "bdev_nvme_cuse_register", 00:06:04.024 "bdev_opal_new_user", 00:06:04.024 "bdev_opal_set_lock_state", 00:06:04.024 "bdev_opal_delete", 00:06:04.024 "bdev_opal_get_info", 00:06:04.024 "bdev_opal_create", 00:06:04.024 "bdev_nvme_opal_revert", 00:06:04.024 "bdev_nvme_opal_init", 00:06:04.024 "bdev_nvme_send_cmd", 00:06:04.024 "bdev_nvme_get_path_iostat", 00:06:04.024 "bdev_nvme_get_mdns_discovery_info", 00:06:04.024 "bdev_nvme_stop_mdns_discovery", 00:06:04.024 "bdev_nvme_start_mdns_discovery", 00:06:04.024 "bdev_nvme_set_multipath_policy", 00:06:04.024 "bdev_nvme_set_preferred_path", 00:06:04.024 "bdev_nvme_get_io_paths", 00:06:04.024 "bdev_nvme_remove_error_injection", 00:06:04.024 "bdev_nvme_add_error_injection", 00:06:04.024 "bdev_nvme_get_discovery_info", 00:06:04.024 "bdev_nvme_stop_discovery", 00:06:04.024 "bdev_nvme_start_discovery", 00:06:04.024 "bdev_nvme_get_controller_health_info", 00:06:04.024 "bdev_nvme_disable_controller", 00:06:04.024 "bdev_nvme_enable_controller", 00:06:04.024 "bdev_nvme_reset_controller", 00:06:04.024 "bdev_nvme_get_transport_statistics", 00:06:04.024 "bdev_nvme_apply_firmware", 00:06:04.024 "bdev_nvme_detach_controller", 
00:06:04.024 "bdev_nvme_get_controllers", 00:06:04.024 "bdev_nvme_attach_controller", 00:06:04.024 "bdev_nvme_set_hotplug", 00:06:04.024 "bdev_nvme_set_options", 00:06:04.024 "bdev_passthru_delete", 00:06:04.024 "bdev_passthru_create", 00:06:04.024 "bdev_lvol_grow_lvstore", 00:06:04.024 "bdev_lvol_get_lvols", 00:06:04.024 "bdev_lvol_get_lvstores", 00:06:04.024 "bdev_lvol_delete", 00:06:04.024 "bdev_lvol_set_read_only", 00:06:04.024 "bdev_lvol_resize", 00:06:04.024 "bdev_lvol_decouple_parent", 00:06:04.024 "bdev_lvol_inflate", 00:06:04.024 "bdev_lvol_rename", 00:06:04.024 "bdev_lvol_clone_bdev", 00:06:04.024 "bdev_lvol_clone", 00:06:04.024 "bdev_lvol_snapshot", 00:06:04.024 "bdev_lvol_create", 00:06:04.024 "bdev_lvol_delete_lvstore", 00:06:04.024 "bdev_lvol_rename_lvstore", 00:06:04.024 "bdev_lvol_create_lvstore", 00:06:04.024 "bdev_raid_set_options", 00:06:04.024 "bdev_raid_remove_base_bdev", 00:06:04.024 "bdev_raid_add_base_bdev", 00:06:04.024 "bdev_raid_delete", 00:06:04.024 "bdev_raid_create", 00:06:04.024 "bdev_raid_get_bdevs", 00:06:04.024 "bdev_error_inject_error", 00:06:04.024 "bdev_error_delete", 00:06:04.024 "bdev_error_create", 00:06:04.024 "bdev_split_delete", 00:06:04.024 "bdev_split_create", 00:06:04.024 "bdev_delay_delete", 00:06:04.024 "bdev_delay_create", 00:06:04.024 "bdev_delay_update_latency", 00:06:04.024 "bdev_zone_block_delete", 00:06:04.024 "bdev_zone_block_create", 00:06:04.024 "blobfs_create", 00:06:04.024 "blobfs_detect", 00:06:04.024 "blobfs_set_cache_size", 00:06:04.024 "bdev_aio_delete", 00:06:04.024 "bdev_aio_rescan", 00:06:04.024 "bdev_aio_create", 00:06:04.024 "bdev_ftl_set_property", 00:06:04.024 "bdev_ftl_get_properties", 00:06:04.024 "bdev_ftl_get_stats", 00:06:04.024 "bdev_ftl_unmap", 00:06:04.024 "bdev_ftl_unload", 00:06:04.024 "bdev_ftl_delete", 00:06:04.024 "bdev_ftl_load", 00:06:04.024 "bdev_ftl_create", 00:06:04.024 "bdev_virtio_attach_controller", 00:06:04.024 "bdev_virtio_scsi_get_devices", 00:06:04.024 "bdev_virtio_detach_controller", 00:06:04.024 "bdev_virtio_blk_set_hotplug", 00:06:04.024 "bdev_iscsi_delete", 00:06:04.024 "bdev_iscsi_create", 00:06:04.024 "bdev_iscsi_set_options", 00:06:04.024 "accel_error_inject_error", 00:06:04.024 "ioat_scan_accel_module", 00:06:04.024 "dsa_scan_accel_module", 00:06:04.024 "iaa_scan_accel_module", 00:06:04.024 "iscsi_set_options", 00:06:04.024 "iscsi_get_auth_groups", 00:06:04.024 "iscsi_auth_group_remove_secret", 00:06:04.024 "iscsi_auth_group_add_secret", 00:06:04.024 "iscsi_delete_auth_group", 00:06:04.024 "iscsi_create_auth_group", 00:06:04.024 "iscsi_set_discovery_auth", 00:06:04.024 "iscsi_get_options", 00:06:04.024 "iscsi_target_node_request_logout", 00:06:04.024 "iscsi_target_node_set_redirect", 00:06:04.024 "iscsi_target_node_set_auth", 00:06:04.024 "iscsi_target_node_add_lun", 00:06:04.024 "iscsi_get_connections", 00:06:04.024 "iscsi_portal_group_set_auth", 00:06:04.024 "iscsi_start_portal_group", 00:06:04.024 "iscsi_delete_portal_group", 00:06:04.024 "iscsi_create_portal_group", 00:06:04.024 "iscsi_get_portal_groups", 00:06:04.024 "iscsi_delete_target_node", 00:06:04.024 "iscsi_target_node_remove_pg_ig_maps", 00:06:04.024 "iscsi_target_node_add_pg_ig_maps", 00:06:04.024 "iscsi_create_target_node", 00:06:04.024 "iscsi_get_target_nodes", 00:06:04.024 "iscsi_delete_initiator_group", 00:06:04.024 "iscsi_initiator_group_remove_initiators", 00:06:04.024 "iscsi_initiator_group_add_initiators", 00:06:04.024 "iscsi_create_initiator_group", 00:06:04.024 "iscsi_get_initiator_groups", 00:06:04.024 
"nvmf_set_crdt", 00:06:04.024 "nvmf_set_config", 00:06:04.024 "nvmf_set_max_subsystems", 00:06:04.024 "nvmf_subsystem_get_listeners", 00:06:04.024 "nvmf_subsystem_get_qpairs", 00:06:04.024 "nvmf_subsystem_get_controllers", 00:06:04.024 "nvmf_get_stats", 00:06:04.024 "nvmf_get_transports", 00:06:04.024 "nvmf_create_transport", 00:06:04.024 "nvmf_get_targets", 00:06:04.024 "nvmf_delete_target", 00:06:04.024 "nvmf_create_target", 00:06:04.024 "nvmf_subsystem_allow_any_host", 00:06:04.024 "nvmf_subsystem_remove_host", 00:06:04.024 "nvmf_subsystem_add_host", 00:06:04.024 "nvmf_subsystem_remove_ns", 00:06:04.024 "nvmf_subsystem_add_ns", 00:06:04.024 "nvmf_subsystem_listener_set_ana_state", 00:06:04.024 "nvmf_discovery_get_referrals", 00:06:04.024 "nvmf_discovery_remove_referral", 00:06:04.024 "nvmf_discovery_add_referral", 00:06:04.024 "nvmf_subsystem_remove_listener", 00:06:04.024 "nvmf_subsystem_add_listener", 00:06:04.024 "nvmf_delete_subsystem", 00:06:04.024 "nvmf_create_subsystem", 00:06:04.024 "nvmf_get_subsystems", 00:06:04.024 "env_dpdk_get_mem_stats", 00:06:04.024 "nbd_get_disks", 00:06:04.024 "nbd_stop_disk", 00:06:04.024 "nbd_start_disk", 00:06:04.024 "ublk_recover_disk", 00:06:04.024 "ublk_get_disks", 00:06:04.024 "ublk_stop_disk", 00:06:04.024 "ublk_start_disk", 00:06:04.024 "ublk_destroy_target", 00:06:04.024 "ublk_create_target", 00:06:04.024 "virtio_blk_create_transport", 00:06:04.024 "virtio_blk_get_transports", 00:06:04.024 "vhost_controller_set_coalescing", 00:06:04.024 "vhost_get_controllers", 00:06:04.024 "vhost_delete_controller", 00:06:04.024 "vhost_create_blk_controller", 00:06:04.024 "vhost_scsi_controller_remove_target", 00:06:04.024 "vhost_scsi_controller_add_target", 00:06:04.024 "vhost_start_scsi_controller", 00:06:04.024 "vhost_create_scsi_controller", 00:06:04.024 "thread_set_cpumask", 00:06:04.024 "framework_get_scheduler", 00:06:04.024 "framework_set_scheduler", 00:06:04.024 "framework_get_reactors", 00:06:04.024 "thread_get_io_channels", 00:06:04.025 "thread_get_pollers", 00:06:04.025 "thread_get_stats", 00:06:04.025 "framework_monitor_context_switch", 00:06:04.025 "spdk_kill_instance", 00:06:04.025 "log_enable_timestamps", 00:06:04.025 "log_get_flags", 00:06:04.025 "log_clear_flag", 00:06:04.025 "log_set_flag", 00:06:04.025 "log_get_level", 00:06:04.025 "log_set_level", 00:06:04.025 "log_get_print_level", 00:06:04.025 "log_set_print_level", 00:06:04.025 "framework_enable_cpumask_locks", 00:06:04.025 "framework_disable_cpumask_locks", 00:06:04.025 "framework_wait_init", 00:06:04.025 "framework_start_init", 00:06:04.025 "scsi_get_devices", 00:06:04.025 "bdev_get_histogram", 00:06:04.025 "bdev_enable_histogram", 00:06:04.025 "bdev_set_qos_limit", 00:06:04.025 "bdev_set_qd_sampling_period", 00:06:04.025 "bdev_get_bdevs", 00:06:04.025 "bdev_reset_iostat", 00:06:04.025 "bdev_get_iostat", 00:06:04.025 "bdev_examine", 00:06:04.025 "bdev_wait_for_examine", 00:06:04.025 "bdev_set_options", 00:06:04.025 "notify_get_notifications", 00:06:04.025 "notify_get_types", 00:06:04.025 "accel_get_stats", 00:06:04.025 "accel_set_options", 00:06:04.025 "accel_set_driver", 00:06:04.025 "accel_crypto_key_destroy", 00:06:04.025 "accel_crypto_keys_get", 00:06:04.025 "accel_crypto_key_create", 00:06:04.025 "accel_assign_opc", 00:06:04.025 "accel_get_module_info", 00:06:04.025 "accel_get_opc_assignments", 00:06:04.025 "vmd_rescan", 00:06:04.025 "vmd_remove_device", 00:06:04.025 "vmd_enable", 00:06:04.025 "sock_set_default_impl", 00:06:04.025 "sock_impl_set_options", 00:06:04.025 
"sock_impl_get_options", 00:06:04.025 "iobuf_get_stats", 00:06:04.025 "iobuf_set_options", 00:06:04.025 "framework_get_pci_devices", 00:06:04.025 "framework_get_config", 00:06:04.025 "framework_get_subsystems", 00:06:04.025 "trace_get_info", 00:06:04.025 "trace_get_tpoint_group_mask", 00:06:04.025 "trace_disable_tpoint_group", 00:06:04.025 "trace_enable_tpoint_group", 00:06:04.025 "trace_clear_tpoint_mask", 00:06:04.025 "trace_set_tpoint_mask", 00:06:04.025 "spdk_get_version", 00:06:04.025 "rpc_get_methods" 00:06:04.025 ] 00:06:04.025 11:00:24 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:04.025 11:00:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:04.025 11:00:24 -- common/autotest_common.sh@10 -- # set +x 00:06:04.025 11:00:24 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:04.025 11:00:24 -- spdkcli/tcp.sh@38 -- # killprocess 1448426 00:06:04.025 11:00:24 -- common/autotest_common.sh@936 -- # '[' -z 1448426 ']' 00:06:04.025 11:00:24 -- common/autotest_common.sh@940 -- # kill -0 1448426 00:06:04.025 11:00:24 -- common/autotest_common.sh@941 -- # uname 00:06:04.025 11:00:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:04.025 11:00:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1448426 00:06:04.025 11:00:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:04.025 11:00:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:04.025 11:00:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1448426' 00:06:04.025 killing process with pid 1448426 00:06:04.025 11:00:24 -- common/autotest_common.sh@955 -- # kill 1448426 00:06:04.025 11:00:24 -- common/autotest_common.sh@960 -- # wait 1448426 00:06:04.284 00:06:04.284 real 0m1.599s 00:06:04.284 user 0m2.904s 00:06:04.284 sys 0m0.446s 00:06:04.284 11:00:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:04.284 11:00:24 -- common/autotest_common.sh@10 -- # set +x 00:06:04.284 ************************************ 00:06:04.284 END TEST spdkcli_tcp 00:06:04.284 ************************************ 00:06:04.543 11:00:24 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:04.543 11:00:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:04.543 11:00:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.543 11:00:24 -- common/autotest_common.sh@10 -- # set +x 00:06:04.543 ************************************ 00:06:04.543 START TEST dpdk_mem_utility 00:06:04.543 ************************************ 00:06:04.543 11:00:24 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:04.543 * Looking for test storage... 
00:06:04.543 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:06:04.543 11:00:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:04.543 11:00:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:04.543 11:00:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:04.543 11:00:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:04.543 11:00:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:04.543 11:00:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:04.543 11:00:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:04.543 11:00:25 -- scripts/common.sh@335 -- # IFS=.-: 00:06:04.543 11:00:25 -- scripts/common.sh@335 -- # read -ra ver1 00:06:04.543 11:00:25 -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.543 11:00:25 -- scripts/common.sh@336 -- # read -ra ver2 00:06:04.543 11:00:25 -- scripts/common.sh@337 -- # local 'op=<' 00:06:04.543 11:00:25 -- scripts/common.sh@339 -- # ver1_l=2 00:06:04.543 11:00:25 -- scripts/common.sh@340 -- # ver2_l=1 00:06:04.543 11:00:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:04.543 11:00:25 -- scripts/common.sh@343 -- # case "$op" in 00:06:04.543 11:00:25 -- scripts/common.sh@344 -- # : 1 00:06:04.543 11:00:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:04.543 11:00:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:04.543 11:00:25 -- scripts/common.sh@364 -- # decimal 1 00:06:04.543 11:00:25 -- scripts/common.sh@352 -- # local d=1 00:06:04.543 11:00:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.543 11:00:25 -- scripts/common.sh@354 -- # echo 1 00:06:04.543 11:00:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:04.543 11:00:25 -- scripts/common.sh@365 -- # decimal 2 00:06:04.543 11:00:25 -- scripts/common.sh@352 -- # local d=2 00:06:04.543 11:00:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.543 11:00:25 -- scripts/common.sh@354 -- # echo 2 00:06:04.543 11:00:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:04.543 11:00:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:04.543 11:00:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:04.543 11:00:25 -- scripts/common.sh@367 -- # return 0 00:06:04.543 11:00:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.543 11:00:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:04.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.543 --rc genhtml_branch_coverage=1 00:06:04.543 --rc genhtml_function_coverage=1 00:06:04.543 --rc genhtml_legend=1 00:06:04.543 --rc geninfo_all_blocks=1 00:06:04.543 --rc geninfo_unexecuted_blocks=1 00:06:04.543 00:06:04.543 ' 00:06:04.543 11:00:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:04.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.543 --rc genhtml_branch_coverage=1 00:06:04.543 --rc genhtml_function_coverage=1 00:06:04.543 --rc genhtml_legend=1 00:06:04.543 --rc geninfo_all_blocks=1 00:06:04.543 --rc geninfo_unexecuted_blocks=1 00:06:04.543 00:06:04.543 ' 00:06:04.543 11:00:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:04.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.543 --rc genhtml_branch_coverage=1 00:06:04.543 --rc genhtml_function_coverage=1 00:06:04.543 --rc genhtml_legend=1 00:06:04.543 --rc geninfo_all_blocks=1 00:06:04.543 --rc geninfo_unexecuted_blocks=1 00:06:04.543 
00:06:04.543 ' 00:06:04.543 11:00:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:04.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.543 --rc genhtml_branch_coverage=1 00:06:04.543 --rc genhtml_function_coverage=1 00:06:04.543 --rc genhtml_legend=1 00:06:04.543 --rc geninfo_all_blocks=1 00:06:04.543 --rc geninfo_unexecuted_blocks=1 00:06:04.543 00:06:04.543 ' 00:06:04.543 11:00:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:04.543 11:00:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1448869 00:06:04.543 11:00:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1448869 00:06:04.543 11:00:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:04.543 11:00:25 -- common/autotest_common.sh@829 -- # '[' -z 1448869 ']' 00:06:04.543 11:00:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.543 11:00:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.543 11:00:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.543 11:00:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.543 11:00:25 -- common/autotest_common.sh@10 -- # set +x 00:06:04.543 [2024-12-13 11:00:25.072243] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:04.543 [2024-12-13 11:00:25.072300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1448869 ] 00:06:04.543 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.802 [2024-12-13 11:00:25.123054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.802 [2024-12-13 11:00:25.194041] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:04.802 [2024-12-13 11:00:25.194149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.369 11:00:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.369 11:00:25 -- common/autotest_common.sh@862 -- # return 0 00:06:05.369 11:00:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:05.369 11:00:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:05.369 11:00:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.369 11:00:25 -- common/autotest_common.sh@10 -- # set +x 00:06:05.369 { 00:06:05.369 "filename": "/tmp/spdk_mem_dump.txt" 00:06:05.369 } 00:06:05.369 11:00:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.369 11:00:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:05.369 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:05.369 1 heaps totaling size 814.000000 MiB 00:06:05.369 size: 814.000000 MiB heap id: 0 00:06:05.369 end heaps---------- 00:06:05.369 8 mempools totaling size 598.116089 MiB 00:06:05.369 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:05.369 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:05.369 size: 84.521057 MiB name: 
bdev_io_1448869 00:06:05.369 size: 51.011292 MiB name: evtpool_1448869 00:06:05.369 size: 50.003479 MiB name: msgpool_1448869 00:06:05.369 size: 21.763794 MiB name: PDU_Pool 00:06:05.369 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:05.369 size: 0.026123 MiB name: Session_Pool 00:06:05.369 end mempools------- 00:06:05.369 6 memzones totaling size 4.142822 MiB 00:06:05.369 size: 1.000366 MiB name: RG_ring_0_1448869 00:06:05.369 size: 1.000366 MiB name: RG_ring_1_1448869 00:06:05.369 size: 1.000366 MiB name: RG_ring_4_1448869 00:06:05.369 size: 1.000366 MiB name: RG_ring_5_1448869 00:06:05.369 size: 0.125366 MiB name: RG_ring_2_1448869 00:06:05.369 size: 0.015991 MiB name: RG_ring_3_1448869 00:06:05.369 end memzones------- 00:06:05.369 11:00:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:05.629 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:05.629 list of free elements. size: 12.519348 MiB 00:06:05.629 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:05.629 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:05.629 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:05.629 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:05.629 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:05.629 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:05.629 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:05.629 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:05.629 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:05.629 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:05.629 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:05.629 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:05.629 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:05.629 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:05.629 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:05.629 list of standard malloc elements. 
size: 199.218079 MiB 00:06:05.629 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:05.629 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:05.629 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:05.629 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:05.629 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:05.629 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:05.629 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:05.629 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:05.629 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:05.629 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:05.629 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:05.629 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:05.629 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:05.629 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:05.629 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:05.629 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:05.629 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:05.629 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:05.629 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:05.629 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:05.629 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:05.629 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:05.629 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:05.629 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:05.629 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:05.629 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:05.629 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:05.629 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:05.629 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:05.629 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:05.629 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:05.629 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:05.629 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:05.629 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:05.629 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:05.629 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:05.629 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:05.629 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:05.629 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:05.629 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:05.629 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:05.629 list of memzone associated elements. 
size: 602.262573 MiB 00:06:05.629 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:05.629 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:05.629 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:05.629 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:05.629 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:05.629 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1448869_0 00:06:05.629 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:05.629 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1448869_0 00:06:05.629 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:05.629 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1448869_0 00:06:05.629 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:05.629 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:05.629 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:05.629 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:05.629 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:05.629 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1448869 00:06:05.629 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:05.629 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1448869 00:06:05.629 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:05.629 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1448869 00:06:05.629 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:05.629 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:05.629 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:05.629 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:05.629 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:05.629 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:05.629 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:05.629 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:05.629 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:05.629 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1448869 00:06:05.629 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:05.629 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1448869 00:06:05.629 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:05.629 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1448869 00:06:05.629 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:05.629 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1448869 00:06:05.629 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:05.629 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1448869 00:06:05.629 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:05.629 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:05.629 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:05.629 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:05.629 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:05.629 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:05.629 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:05.629 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1448869 00:06:05.629 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:05.629 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:05.629 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:05.629 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:05.629 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:05.629 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1448869 00:06:05.629 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:05.629 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:05.629 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:05.629 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1448869 00:06:05.629 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:05.629 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1448869 00:06:05.629 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:05.630 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:05.630 11:00:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:05.630 11:00:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1448869 00:06:05.630 11:00:25 -- common/autotest_common.sh@936 -- # '[' -z 1448869 ']' 00:06:05.630 11:00:25 -- common/autotest_common.sh@940 -- # kill -0 1448869 00:06:05.630 11:00:25 -- common/autotest_common.sh@941 -- # uname 00:06:05.630 11:00:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:05.630 11:00:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1448869 00:06:05.630 11:00:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:05.630 11:00:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:05.630 11:00:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1448869' 00:06:05.630 killing process with pid 1448869 00:06:05.630 11:00:26 -- common/autotest_common.sh@955 -- # kill 1448869 00:06:05.630 11:00:26 -- common/autotest_common.sh@960 -- # wait 1448869 00:06:05.889 00:06:05.889 real 0m1.450s 00:06:05.889 user 0m1.499s 00:06:05.889 sys 0m0.404s 00:06:05.889 11:00:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.889 11:00:26 -- common/autotest_common.sh@10 -- # set +x 00:06:05.889 ************************************ 00:06:05.889 END TEST dpdk_mem_utility 00:06:05.889 ************************************ 00:06:05.889 11:00:26 -- spdk/autotest.sh@174 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:05.889 11:00:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:05.889 11:00:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.889 11:00:26 -- common/autotest_common.sh@10 -- # set +x 00:06:05.889 ************************************ 00:06:05.889 START TEST event 00:06:05.889 ************************************ 00:06:05.889 11:00:26 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:05.889 * Looking for test storage... 
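The dpdk_mem_utility pass that ends just above follows a simple flow: with spdk_tgt running, the env_dpdk_get_mem_stats RPC writes DPDK memory statistics to /tmp/spdk_mem_dump.txt, and dpdk_mem_info.py renders that dump as the heap, mempool and memzone summary seen in the trace. A hedged sketch of the same steps; plain rpc.py stands in for the test's rpc_cmd wrapper, which is an assumption about the helper rather than something shown verbatim above.

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    $SPDK/scripts/rpc.py env_dpdk_get_mem_stats      # target writes /tmp/spdk_mem_dump.txt
    $SPDK/scripts/dpdk_mem_info.py                   # heap / mempool / memzone summary
    $SPDK/scripts/dpdk_mem_info.py -m 0              # per-element detail for heap 0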
00:06:05.889 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:05.889 11:00:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:05.889 11:00:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:05.889 11:00:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:06.149 11:00:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:06.149 11:00:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:06.149 11:00:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:06.149 11:00:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:06.149 11:00:26 -- scripts/common.sh@335 -- # IFS=.-: 00:06:06.149 11:00:26 -- scripts/common.sh@335 -- # read -ra ver1 00:06:06.149 11:00:26 -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.149 11:00:26 -- scripts/common.sh@336 -- # read -ra ver2 00:06:06.149 11:00:26 -- scripts/common.sh@337 -- # local 'op=<' 00:06:06.149 11:00:26 -- scripts/common.sh@339 -- # ver1_l=2 00:06:06.149 11:00:26 -- scripts/common.sh@340 -- # ver2_l=1 00:06:06.149 11:00:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:06.149 11:00:26 -- scripts/common.sh@343 -- # case "$op" in 00:06:06.149 11:00:26 -- scripts/common.sh@344 -- # : 1 00:06:06.149 11:00:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:06.149 11:00:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:06.149 11:00:26 -- scripts/common.sh@364 -- # decimal 1 00:06:06.149 11:00:26 -- scripts/common.sh@352 -- # local d=1 00:06:06.149 11:00:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.149 11:00:26 -- scripts/common.sh@354 -- # echo 1 00:06:06.149 11:00:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:06.149 11:00:26 -- scripts/common.sh@365 -- # decimal 2 00:06:06.149 11:00:26 -- scripts/common.sh@352 -- # local d=2 00:06:06.149 11:00:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.149 11:00:26 -- scripts/common.sh@354 -- # echo 2 00:06:06.149 11:00:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:06.149 11:00:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:06.149 11:00:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:06.149 11:00:26 -- scripts/common.sh@367 -- # return 0 00:06:06.149 11:00:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.149 11:00:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:06.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.149 --rc genhtml_branch_coverage=1 00:06:06.149 --rc genhtml_function_coverage=1 00:06:06.149 --rc genhtml_legend=1 00:06:06.149 --rc geninfo_all_blocks=1 00:06:06.149 --rc geninfo_unexecuted_blocks=1 00:06:06.149 00:06:06.149 ' 00:06:06.149 11:00:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:06.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.149 --rc genhtml_branch_coverage=1 00:06:06.149 --rc genhtml_function_coverage=1 00:06:06.149 --rc genhtml_legend=1 00:06:06.149 --rc geninfo_all_blocks=1 00:06:06.149 --rc geninfo_unexecuted_blocks=1 00:06:06.149 00:06:06.149 ' 00:06:06.149 11:00:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:06.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.149 --rc genhtml_branch_coverage=1 00:06:06.149 --rc genhtml_function_coverage=1 00:06:06.149 --rc genhtml_legend=1 00:06:06.149 --rc geninfo_all_blocks=1 00:06:06.149 --rc geninfo_unexecuted_blocks=1 00:06:06.149 00:06:06.149 ' 
00:06:06.149 11:00:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:06.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.149 --rc genhtml_branch_coverage=1 00:06:06.149 --rc genhtml_function_coverage=1 00:06:06.149 --rc genhtml_legend=1 00:06:06.149 --rc geninfo_all_blocks=1 00:06:06.149 --rc geninfo_unexecuted_blocks=1 00:06:06.149 00:06:06.149 ' 00:06:06.149 11:00:26 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:06.149 11:00:26 -- bdev/nbd_common.sh@6 -- # set -e 00:06:06.149 11:00:26 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:06.149 11:00:26 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:06.149 11:00:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.149 11:00:26 -- common/autotest_common.sh@10 -- # set +x 00:06:06.149 ************************************ 00:06:06.149 START TEST event_perf 00:06:06.149 ************************************ 00:06:06.149 11:00:26 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:06.149 Running I/O for 1 seconds...[2024-12-13 11:00:26.546821] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:06.149 [2024-12-13 11:00:26.546899] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1449220 ] 00:06:06.149 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.149 [2024-12-13 11:00:26.600964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:06.149 [2024-12-13 11:00:26.667598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.149 [2024-12-13 11:00:26.667682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.149 [2024-12-13 11:00:26.667769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:06.149 [2024-12-13 11:00:26.667771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.527 Running I/O for 1 seconds... 00:06:07.527 lcore 0: 218575 00:06:07.527 lcore 1: 218572 00:06:07.527 lcore 2: 218573 00:06:07.527 lcore 3: 218574 00:06:07.527 done. 
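The per-lcore counters printed just above come from event_perf: the -m 0xF mask starts reactors on lcores 0 through 3 and -t 1 runs the event loop for one second, so the test reports one event count per lcore before printing done. The invocation, restated from the trace as a standalone command:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1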
00:06:07.527 00:06:07.527 real 0m1.222s 00:06:07.527 user 0m4.144s 00:06:07.527 sys 0m0.076s 00:06:07.527 11:00:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:07.527 11:00:27 -- common/autotest_common.sh@10 -- # set +x 00:06:07.527 ************************************ 00:06:07.527 END TEST event_perf 00:06:07.527 ************************************ 00:06:07.527 11:00:27 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:07.527 11:00:27 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:07.527 11:00:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.527 11:00:27 -- common/autotest_common.sh@10 -- # set +x 00:06:07.527 ************************************ 00:06:07.527 START TEST event_reactor 00:06:07.527 ************************************ 00:06:07.527 11:00:27 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:07.527 [2024-12-13 11:00:27.807391] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:07.527 [2024-12-13 11:00:27.807462] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1449397 ] 00:06:07.527 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.527 [2024-12-13 11:00:27.862697] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.527 [2024-12-13 11:00:27.926744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.464 test_start 00:06:08.465 oneshot 00:06:08.465 tick 100 00:06:08.465 tick 100 00:06:08.465 tick 250 00:06:08.465 tick 100 00:06:08.465 tick 100 00:06:08.465 tick 100 00:06:08.465 tick 250 00:06:08.465 tick 500 00:06:08.465 tick 100 00:06:08.465 tick 100 00:06:08.465 tick 250 00:06:08.465 tick 100 00:06:08.465 tick 100 00:06:08.465 test_end 00:06:08.465 00:06:08.465 real 0m1.218s 00:06:08.465 user 0m1.143s 00:06:08.465 sys 0m0.071s 00:06:08.465 11:00:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.465 11:00:29 -- common/autotest_common.sh@10 -- # set +x 00:06:08.465 ************************************ 00:06:08.465 END TEST event_reactor 00:06:08.465 ************************************ 00:06:08.724 11:00:29 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:08.724 11:00:29 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:08.724 11:00:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.724 11:00:29 -- common/autotest_common.sh@10 -- # set +x 00:06:08.724 ************************************ 00:06:08.724 START TEST event_reactor_perf 00:06:08.724 ************************************ 00:06:08.724 11:00:29 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:08.724 [2024-12-13 11:00:29.050146] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:08.724 [2024-12-13 11:00:29.050190] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1449563 ] 00:06:08.724 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.724 [2024-12-13 11:00:29.099021] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.724 [2024-12-13 11:00:29.163952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.103 test_start 00:06:10.103 test_end 00:06:10.103 Performance: 542109 events per second 00:06:10.103 00:06:10.103 real 0m1.210s 00:06:10.103 user 0m1.139s 00:06:10.103 sys 0m0.067s 00:06:10.103 11:00:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:10.103 11:00:30 -- common/autotest_common.sh@10 -- # set +x 00:06:10.103 ************************************ 00:06:10.103 END TEST event_reactor_perf 00:06:10.103 ************************************ 00:06:10.103 11:00:30 -- event/event.sh@49 -- # uname -s 00:06:10.103 11:00:30 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:10.103 11:00:30 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:10.103 11:00:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:10.103 11:00:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.103 11:00:30 -- common/autotest_common.sh@10 -- # set +x 00:06:10.103 ************************************ 00:06:10.103 START TEST event_scheduler 00:06:10.103 ************************************ 00:06:10.103 11:00:30 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:10.103 * Looking for test storage... 00:06:10.103 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:06:10.103 11:00:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:10.103 11:00:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:10.103 11:00:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:10.103 11:00:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:10.103 11:00:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:10.103 11:00:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:10.103 11:00:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:10.103 11:00:30 -- scripts/common.sh@335 -- # IFS=.-: 00:06:10.103 11:00:30 -- scripts/common.sh@335 -- # read -ra ver1 00:06:10.103 11:00:30 -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.103 11:00:30 -- scripts/common.sh@336 -- # read -ra ver2 00:06:10.103 11:00:30 -- scripts/common.sh@337 -- # local 'op=<' 00:06:10.103 11:00:30 -- scripts/common.sh@339 -- # ver1_l=2 00:06:10.103 11:00:30 -- scripts/common.sh@340 -- # ver2_l=1 00:06:10.103 11:00:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:10.103 11:00:30 -- scripts/common.sh@343 -- # case "$op" in 00:06:10.103 11:00:30 -- scripts/common.sh@344 -- # : 1 00:06:10.103 11:00:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:10.103 11:00:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.103 11:00:30 -- scripts/common.sh@364 -- # decimal 1 00:06:10.103 11:00:30 -- scripts/common.sh@352 -- # local d=1 00:06:10.103 11:00:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.103 11:00:30 -- scripts/common.sh@354 -- # echo 1 00:06:10.103 11:00:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:10.103 11:00:30 -- scripts/common.sh@365 -- # decimal 2 00:06:10.103 11:00:30 -- scripts/common.sh@352 -- # local d=2 00:06:10.103 11:00:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.103 11:00:30 -- scripts/common.sh@354 -- # echo 2 00:06:10.103 11:00:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:10.103 11:00:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:10.103 11:00:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:10.103 11:00:30 -- scripts/common.sh@367 -- # return 0 00:06:10.103 11:00:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.103 11:00:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:10.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.103 --rc genhtml_branch_coverage=1 00:06:10.103 --rc genhtml_function_coverage=1 00:06:10.103 --rc genhtml_legend=1 00:06:10.103 --rc geninfo_all_blocks=1 00:06:10.103 --rc geninfo_unexecuted_blocks=1 00:06:10.103 00:06:10.103 ' 00:06:10.103 11:00:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:10.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.103 --rc genhtml_branch_coverage=1 00:06:10.103 --rc genhtml_function_coverage=1 00:06:10.103 --rc genhtml_legend=1 00:06:10.103 --rc geninfo_all_blocks=1 00:06:10.103 --rc geninfo_unexecuted_blocks=1 00:06:10.103 00:06:10.103 ' 00:06:10.103 11:00:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:10.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.103 --rc genhtml_branch_coverage=1 00:06:10.103 --rc genhtml_function_coverage=1 00:06:10.103 --rc genhtml_legend=1 00:06:10.103 --rc geninfo_all_blocks=1 00:06:10.103 --rc geninfo_unexecuted_blocks=1 00:06:10.103 00:06:10.103 ' 00:06:10.103 11:00:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:10.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.103 --rc genhtml_branch_coverage=1 00:06:10.103 --rc genhtml_function_coverage=1 00:06:10.103 --rc genhtml_legend=1 00:06:10.103 --rc geninfo_all_blocks=1 00:06:10.103 --rc geninfo_unexecuted_blocks=1 00:06:10.103 00:06:10.103 ' 00:06:10.103 11:00:30 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:10.103 11:00:30 -- scheduler/scheduler.sh@35 -- # scheduler_pid=1449892 00:06:10.103 11:00:30 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:10.103 11:00:30 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:10.103 11:00:30 -- scheduler/scheduler.sh@37 -- # waitforlisten 1449892 00:06:10.103 11:00:30 -- common/autotest_common.sh@829 -- # '[' -z 1449892 ']' 00:06:10.103 11:00:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.103 11:00:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.103 11:00:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:10.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.103 11:00:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.103 11:00:30 -- common/autotest_common.sh@10 -- # set +x 00:06:10.103 [2024-12-13 11:00:30.499532] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:10.103 [2024-12-13 11:00:30.499580] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1449892 ] 00:06:10.103 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.103 [2024-12-13 11:00:30.548795] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:10.103 [2024-12-13 11:00:30.616804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.103 [2024-12-13 11:00:30.616821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.103 [2024-12-13 11:00:30.616906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:10.104 [2024-12-13 11:00:30.616908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.041 11:00:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.041 11:00:31 -- common/autotest_common.sh@862 -- # return 0 00:06:11.041 11:00:31 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:11.041 11:00:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.041 11:00:31 -- common/autotest_common.sh@10 -- # set +x 00:06:11.041 POWER: Env isn't set yet! 00:06:11.041 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:11.041 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:11.041 POWER: Cannot set governor of lcore 0 to userspace 00:06:11.041 POWER: Attempting to initialise PSTAT power management... 00:06:11.041 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:11.041 POWER: Initialized successfully for lcore 0 power management 00:06:11.041 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:11.041 POWER: Initialized successfully for lcore 1 power management 00:06:11.041 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:11.041 POWER: Initialized successfully for lcore 2 power management 00:06:11.041 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:11.041 POWER: Initialized successfully for lcore 3 power management 00:06:11.041 [2024-12-13 11:00:31.349398] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:11.041 [2024-12-13 11:00:31.349410] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:11.041 [2024-12-13 11:00:31.349417] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:11.041 11:00:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.041 11:00:31 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:11.041 11:00:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.041 11:00:31 -- common/autotest_common.sh@10 -- # set +x 00:06:11.041 [2024-12-13 11:00:31.416387] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
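The scheduler test traced above comes up in two RPC steps: the app starts with --wait-for-rpc, the dynamic scheduler is selected (which also triggers the power-management governor setup logged for lcores 0-3), and framework_start_init then releases the app, producing the "Scheduler test application started" notice. A rough equivalent with plain rpc.py, assuming the same default /var/tmp/spdk.sock socket the test's rpc_cmd wrapper talks to:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    $SPDK/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    $SPDK/scripts/rpc.py framework_set_scheduler dynamic   # per the trace: load limit 20, core limit 80, core busy 95
    $SPDK/scripts/rpc.py framework_start_init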
00:06:11.042 11:00:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.042 11:00:31 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:11.042 11:00:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:11.042 11:00:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.042 11:00:31 -- common/autotest_common.sh@10 -- # set +x 00:06:11.042 ************************************ 00:06:11.042 START TEST scheduler_create_thread 00:06:11.042 ************************************ 00:06:11.042 11:00:31 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:06:11.042 11:00:31 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:11.042 11:00:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.042 11:00:31 -- common/autotest_common.sh@10 -- # set +x 00:06:11.042 2 00:06:11.042 11:00:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.042 11:00:31 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:11.042 11:00:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.042 11:00:31 -- common/autotest_common.sh@10 -- # set +x 00:06:11.042 3 00:06:11.042 11:00:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.042 11:00:31 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:11.042 11:00:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.042 11:00:31 -- common/autotest_common.sh@10 -- # set +x 00:06:11.042 4 00:06:11.042 11:00:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.042 11:00:31 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:11.042 11:00:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.042 11:00:31 -- common/autotest_common.sh@10 -- # set +x 00:06:11.042 5 00:06:11.042 11:00:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.042 11:00:31 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:11.042 11:00:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.042 11:00:31 -- common/autotest_common.sh@10 -- # set +x 00:06:11.042 6 00:06:11.042 11:00:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.042 11:00:31 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:11.042 11:00:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.042 11:00:31 -- common/autotest_common.sh@10 -- # set +x 00:06:11.042 7 00:06:11.042 11:00:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.042 11:00:31 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:11.042 11:00:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.042 11:00:31 -- common/autotest_common.sh@10 -- # set +x 00:06:11.042 8 00:06:11.042 11:00:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.042 11:00:31 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:11.042 11:00:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.042 11:00:31 -- common/autotest_common.sh@10 -- # set +x 00:06:11.042 9 00:06:11.042 
11:00:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.042 11:00:31 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:11.042 11:00:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.042 11:00:31 -- common/autotest_common.sh@10 -- # set +x 00:06:11.042 10 00:06:11.042 11:00:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.042 11:00:31 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:11.042 11:00:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.042 11:00:31 -- common/autotest_common.sh@10 -- # set +x 00:06:11.042 11:00:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.042 11:00:31 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:11.042 11:00:31 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:11.042 11:00:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.042 11:00:31 -- common/autotest_common.sh@10 -- # set +x 00:06:11.979 11:00:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.979 11:00:32 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:11.979 11:00:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.979 11:00:32 -- common/autotest_common.sh@10 -- # set +x 00:06:13.357 11:00:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.357 11:00:33 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:13.357 11:00:33 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:13.357 11:00:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.357 11:00:33 -- common/autotest_common.sh@10 -- # set +x 00:06:14.294 11:00:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.294 00:06:14.294 real 0m3.375s 00:06:14.294 user 0m0.022s 00:06:14.294 sys 0m0.006s 00:06:14.294 11:00:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:14.294 11:00:34 -- common/autotest_common.sh@10 -- # set +x 00:06:14.294 ************************************ 00:06:14.294 END TEST scheduler_create_thread 00:06:14.294 ************************************ 00:06:14.294 11:00:34 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:14.294 11:00:34 -- scheduler/scheduler.sh@46 -- # killprocess 1449892 00:06:14.294 11:00:34 -- common/autotest_common.sh@936 -- # '[' -z 1449892 ']' 00:06:14.294 11:00:34 -- common/autotest_common.sh@940 -- # kill -0 1449892 00:06:14.294 11:00:34 -- common/autotest_common.sh@941 -- # uname 00:06:14.294 11:00:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:14.294 11:00:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1449892 00:06:14.553 11:00:34 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:14.553 11:00:34 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:14.553 11:00:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1449892' 00:06:14.553 killing process with pid 1449892 00:06:14.553 11:00:34 -- common/autotest_common.sh@955 -- # kill 1449892 00:06:14.553 11:00:34 -- common/autotest_common.sh@960 -- # wait 1449892 00:06:14.812 [2024-12-13 11:00:35.179965] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
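[Editor's note] The scheduler_create_thread test that just finished drives everything through the test-only scheduler_plugin RPCs: it creates pinned active and idle threads on each core mask, raises one thread's active level, and deletes another. A minimal sketch of the same calls issued directly with scripts/rpc.py (the plugin name, options, and returned thread ids are taken from the trace above; the plugin is assumed to be importable, as the test framework arranges):

    # busy and idle threads pinned to core 0 (mask 0x1)
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    # unpinned thread created idle, then raised to 50% active
    thread_id=$(./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
    # create a throwaway thread and delete it again
    tmp_id=$(./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete "$tmp_id"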
00:06:14.812 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:06:14.812 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:14.812 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:06:14.812 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:14.812 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:06:14.812 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:14.812 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:06:14.812 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:15.070 00:06:15.070 real 0m5.129s 00:06:15.070 user 0m10.523s 00:06:15.070 sys 0m0.361s 00:06:15.070 11:00:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:15.070 11:00:35 -- common/autotest_common.sh@10 -- # set +x 00:06:15.070 ************************************ 00:06:15.070 END TEST event_scheduler 00:06:15.070 ************************************ 00:06:15.070 11:00:35 -- event/event.sh@51 -- # modprobe -n nbd 00:06:15.070 11:00:35 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:15.070 11:00:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:15.070 11:00:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.070 11:00:35 -- common/autotest_common.sh@10 -- # set +x 00:06:15.071 ************************************ 00:06:15.071 START TEST app_repeat 00:06:15.071 ************************************ 00:06:15.071 11:00:35 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:06:15.071 11:00:35 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.071 11:00:35 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.071 11:00:35 -- event/event.sh@13 -- # local nbd_list 00:06:15.071 11:00:35 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.071 11:00:35 -- event/event.sh@14 -- # local bdev_list 00:06:15.071 11:00:35 -- event/event.sh@15 -- # local repeat_times=4 00:06:15.071 11:00:35 -- event/event.sh@17 -- # modprobe nbd 00:06:15.071 11:00:35 -- event/event.sh@19 -- # repeat_pid=1450969 00:06:15.071 11:00:35 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:15.071 11:00:35 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:15.071 11:00:35 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1450969' 00:06:15.071 Process app_repeat pid: 1450969 00:06:15.071 11:00:35 -- event/event.sh@23 -- # for i in {0..2} 00:06:15.071 11:00:35 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:15.071 spdk_app_start Round 0 00:06:15.071 11:00:35 -- event/event.sh@25 -- # waitforlisten 1450969 /var/tmp/spdk-nbd.sock 00:06:15.071 11:00:35 -- common/autotest_common.sh@829 -- # '[' -z 1450969 ']' 00:06:15.071 11:00:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.071 11:00:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.071 11:00:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:15.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:15.071 11:00:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.071 11:00:35 -- common/autotest_common.sh@10 -- # set +x 00:06:15.071 [2024-12-13 11:00:35.498252] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:15.071 [2024-12-13 11:00:35.498317] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1450969 ] 00:06:15.071 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.071 [2024-12-13 11:00:35.549926] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.071 [2024-12-13 11:00:35.621365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.071 [2024-12-13 11:00:35.621368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.008 11:00:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.008 11:00:36 -- common/autotest_common.sh@862 -- # return 0 00:06:16.008 11:00:36 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:16.008 Malloc0 00:06:16.008 11:00:36 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:16.267 Malloc1 00:06:16.267 11:00:36 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:16.267 11:00:36 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.267 11:00:36 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.267 11:00:36 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:16.267 11:00:36 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.267 11:00:36 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:16.267 11:00:36 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:16.267 11:00:36 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.267 11:00:36 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.267 11:00:36 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:16.267 11:00:36 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.267 11:00:36 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:16.267 11:00:36 -- bdev/nbd_common.sh@12 -- # local i 00:06:16.267 11:00:36 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:16.267 11:00:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.267 11:00:36 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:16.267 /dev/nbd0 00:06:16.527 11:00:36 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:16.527 11:00:36 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:16.527 11:00:36 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:16.527 11:00:36 -- common/autotest_common.sh@867 -- # local i 00:06:16.527 11:00:36 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:16.527 11:00:36 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:16.527 11:00:36 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:16.527 11:00:36 -- common/autotest_common.sh@871 -- 
# break 00:06:16.527 11:00:36 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:16.527 11:00:36 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:16.527 11:00:36 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.527 1+0 records in 00:06:16.527 1+0 records out 00:06:16.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225938 s, 18.1 MB/s 00:06:16.527 11:00:36 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:16.527 11:00:36 -- common/autotest_common.sh@884 -- # size=4096 00:06:16.527 11:00:36 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:16.527 11:00:36 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:16.527 11:00:36 -- common/autotest_common.sh@887 -- # return 0 00:06:16.527 11:00:36 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.527 11:00:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.527 11:00:36 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:16.527 /dev/nbd1 00:06:16.527 11:00:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:16.527 11:00:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:16.527 11:00:37 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:16.527 11:00:37 -- common/autotest_common.sh@867 -- # local i 00:06:16.527 11:00:37 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:16.527 11:00:37 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:16.527 11:00:37 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:16.527 11:00:37 -- common/autotest_common.sh@871 -- # break 00:06:16.527 11:00:37 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:16.527 11:00:37 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:16.527 11:00:37 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.527 1+0 records in 00:06:16.527 1+0 records out 00:06:16.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189947 s, 21.6 MB/s 00:06:16.527 11:00:37 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:16.527 11:00:37 -- common/autotest_common.sh@884 -- # size=4096 00:06:16.527 11:00:37 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:16.527 11:00:37 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:16.527 11:00:37 -- common/autotest_common.sh@887 -- # return 0 00:06:16.527 11:00:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.527 11:00:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.527 11:00:37 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.527 11:00:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.527 11:00:37 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.786 11:00:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:16.786 { 00:06:16.786 "nbd_device": "/dev/nbd0", 00:06:16.786 "bdev_name": "Malloc0" 00:06:16.786 }, 00:06:16.786 { 00:06:16.786 "nbd_device": "/dev/nbd1", 00:06:16.786 "bdev_name": "Malloc1" 00:06:16.786 } 00:06:16.786 ]' 
00:06:16.786 11:00:37 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:16.786 { 00:06:16.786 "nbd_device": "/dev/nbd0", 00:06:16.786 "bdev_name": "Malloc0" 00:06:16.786 }, 00:06:16.786 { 00:06:16.786 "nbd_device": "/dev/nbd1", 00:06:16.786 "bdev_name": "Malloc1" 00:06:16.786 } 00:06:16.786 ]' 00:06:16.786 11:00:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.786 11:00:37 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:16.786 /dev/nbd1' 00:06:16.786 11:00:37 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:16.786 /dev/nbd1' 00:06:16.786 11:00:37 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.786 11:00:37 -- bdev/nbd_common.sh@65 -- # count=2 00:06:16.786 11:00:37 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:16.786 11:00:37 -- bdev/nbd_common.sh@95 -- # count=2 00:06:16.786 11:00:37 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:16.786 11:00:37 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:16.786 11:00:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.786 11:00:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.786 11:00:37 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:16.786 11:00:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:16.786 11:00:37 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:16.787 11:00:37 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:16.787 256+0 records in 00:06:16.787 256+0 records out 00:06:16.787 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107259 s, 97.8 MB/s 00:06:16.787 11:00:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.787 11:00:37 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:16.787 256+0 records in 00:06:16.787 256+0 records out 00:06:16.787 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013593 s, 77.1 MB/s 00:06:16.787 11:00:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.787 11:00:37 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:16.787 256+0 records in 00:06:16.787 256+0 records out 00:06:16.787 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140833 s, 74.5 MB/s 00:06:16.787 11:00:37 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:16.787 11:00:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.787 11:00:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.787 11:00:37 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:16.787 11:00:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:16.787 11:00:37 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:16.787 11:00:37 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:16.787 11:00:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.787 11:00:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:16.787 11:00:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.787 11:00:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 
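[Editor's note] The dd/cmp block above is the nbd round-trip check: fill a scratch file with random data, write it through each exported /dev/nbdX, then compare the device contents back against the file before the devices are torn down. A minimal sketch of the same check for one device, assuming /dev/nbd0 is already exported via nbd_start_disk and using a hypothetical scratch path:

    tmp=/tmp/nbdrandtest                                       # hypothetical scratch file
    dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of random data
    dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct   # write it through the nbd device
    cmp -b -n 1M "$tmp" /dev/nbd0                              # byte-for-byte read-back verification
    rm "$tmp"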
00:06:16.787 11:00:37 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:16.787 11:00:37 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:16.787 11:00:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.787 11:00:37 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.787 11:00:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:16.787 11:00:37 -- bdev/nbd_common.sh@51 -- # local i 00:06:16.787 11:00:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.787 11:00:37 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:17.045 11:00:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:17.045 11:00:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:17.045 11:00:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:17.045 11:00:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.045 11:00:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.045 11:00:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:17.045 11:00:37 -- bdev/nbd_common.sh@41 -- # break 00:06:17.045 11:00:37 -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.046 11:00:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.046 11:00:37 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:17.305 11:00:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:17.305 11:00:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:17.305 11:00:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:17.305 11:00:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.305 11:00:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.305 11:00:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:17.305 11:00:37 -- bdev/nbd_common.sh@41 -- # break 00:06:17.305 11:00:37 -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.305 11:00:37 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.305 11:00:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.305 11:00:37 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.305 11:00:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:17.305 11:00:37 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:17.305 11:00:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.564 11:00:37 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:17.564 11:00:37 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:17.564 11:00:37 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.564 11:00:37 -- bdev/nbd_common.sh@65 -- # true 00:06:17.564 11:00:37 -- bdev/nbd_common.sh@65 -- # count=0 00:06:17.564 11:00:37 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:17.564 11:00:37 -- bdev/nbd_common.sh@104 -- # count=0 00:06:17.564 11:00:37 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:17.564 11:00:37 -- bdev/nbd_common.sh@109 -- # return 0 00:06:17.564 11:00:37 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:17.564 11:00:38 -- event/event.sh@35 -- # sleep 3 00:06:17.823 [2024-12-13 11:00:38.292450] app.c: 798:spdk_app_start: *NOTICE*: Total cores 
available: 2 00:06:17.823 [2024-12-13 11:00:38.350774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.823 [2024-12-13 11:00:38.350777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.823 [2024-12-13 11:00:38.391308] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:17.823 [2024-12-13 11:00:38.391344] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:21.113 11:00:41 -- event/event.sh@23 -- # for i in {0..2} 00:06:21.113 11:00:41 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:21.113 spdk_app_start Round 1 00:06:21.113 11:00:41 -- event/event.sh@25 -- # waitforlisten 1450969 /var/tmp/spdk-nbd.sock 00:06:21.113 11:00:41 -- common/autotest_common.sh@829 -- # '[' -z 1450969 ']' 00:06:21.113 11:00:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:21.113 11:00:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.113 11:00:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:21.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:21.113 11:00:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.113 11:00:41 -- common/autotest_common.sh@10 -- # set +x 00:06:21.113 11:00:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.113 11:00:41 -- common/autotest_common.sh@862 -- # return 0 00:06:21.113 11:00:41 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.113 Malloc0 00:06:21.113 11:00:41 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.113 Malloc1 00:06:21.113 11:00:41 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.113 11:00:41 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.113 11:00:41 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.113 11:00:41 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:21.113 11:00:41 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.113 11:00:41 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:21.113 11:00:41 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.113 11:00:41 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.113 11:00:41 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.113 11:00:41 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:21.113 11:00:41 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.113 11:00:41 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:21.113 11:00:41 -- bdev/nbd_common.sh@12 -- # local i 00:06:21.113 11:00:41 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:21.113 11:00:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.113 11:00:41 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:21.372 /dev/nbd0 00:06:21.372 11:00:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:21.372 11:00:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:21.372 
11:00:41 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:21.372 11:00:41 -- common/autotest_common.sh@867 -- # local i 00:06:21.372 11:00:41 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:21.372 11:00:41 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:21.372 11:00:41 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:21.372 11:00:41 -- common/autotest_common.sh@871 -- # break 00:06:21.372 11:00:41 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:21.372 11:00:41 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:21.372 11:00:41 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:21.372 1+0 records in 00:06:21.372 1+0 records out 00:06:21.372 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195103 s, 21.0 MB/s 00:06:21.372 11:00:41 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:21.372 11:00:41 -- common/autotest_common.sh@884 -- # size=4096 00:06:21.372 11:00:41 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:21.372 11:00:41 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:21.372 11:00:41 -- common/autotest_common.sh@887 -- # return 0 00:06:21.372 11:00:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.372 11:00:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.373 11:00:41 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:21.632 /dev/nbd1 00:06:21.632 11:00:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:21.632 11:00:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:21.632 11:00:41 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:21.632 11:00:41 -- common/autotest_common.sh@867 -- # local i 00:06:21.632 11:00:41 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:21.632 11:00:41 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:21.632 11:00:41 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:21.632 11:00:41 -- common/autotest_common.sh@871 -- # break 00:06:21.632 11:00:41 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:21.632 11:00:41 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:21.632 11:00:41 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:21.632 1+0 records in 00:06:21.632 1+0 records out 00:06:21.632 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000127979 s, 32.0 MB/s 00:06:21.632 11:00:41 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:21.632 11:00:41 -- common/autotest_common.sh@884 -- # size=4096 00:06:21.632 11:00:41 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:21.632 11:00:42 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:21.632 11:00:42 -- common/autotest_common.sh@887 -- # return 0 00:06:21.632 11:00:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.632 11:00:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.632 11:00:42 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.632 11:00:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.632 
11:00:42 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.632 11:00:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:21.632 { 00:06:21.632 "nbd_device": "/dev/nbd0", 00:06:21.632 "bdev_name": "Malloc0" 00:06:21.632 }, 00:06:21.632 { 00:06:21.632 "nbd_device": "/dev/nbd1", 00:06:21.632 "bdev_name": "Malloc1" 00:06:21.632 } 00:06:21.632 ]' 00:06:21.632 11:00:42 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:21.632 { 00:06:21.632 "nbd_device": "/dev/nbd0", 00:06:21.632 "bdev_name": "Malloc0" 00:06:21.632 }, 00:06:21.632 { 00:06:21.632 "nbd_device": "/dev/nbd1", 00:06:21.632 "bdev_name": "Malloc1" 00:06:21.632 } 00:06:21.632 ]' 00:06:21.632 11:00:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.891 11:00:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:21.891 /dev/nbd1' 00:06:21.891 11:00:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.891 11:00:42 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:21.891 /dev/nbd1' 00:06:21.891 11:00:42 -- bdev/nbd_common.sh@65 -- # count=2 00:06:21.891 11:00:42 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:21.891 11:00:42 -- bdev/nbd_common.sh@95 -- # count=2 00:06:21.891 11:00:42 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:21.891 11:00:42 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:21.891 11:00:42 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.891 11:00:42 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.891 11:00:42 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:21.891 11:00:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:21.891 11:00:42 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:21.891 11:00:42 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:21.891 256+0 records in 00:06:21.891 256+0 records out 00:06:21.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0036673 s, 286 MB/s 00:06:21.891 11:00:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.891 11:00:42 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:21.891 256+0 records in 00:06:21.891 256+0 records out 00:06:21.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128898 s, 81.3 MB/s 00:06:21.891 11:00:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.891 11:00:42 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:21.891 256+0 records in 00:06:21.891 256+0 records out 00:06:21.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136883 s, 76.6 MB/s 00:06:21.891 11:00:42 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:21.891 11:00:42 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.892 11:00:42 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.892 11:00:42 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:21.892 11:00:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:21.892 11:00:42 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:21.892 11:00:42 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:21.892 
11:00:42 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.892 11:00:42 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:21.892 11:00:42 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.892 11:00:42 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:21.892 11:00:42 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:21.892 11:00:42 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:21.892 11:00:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.892 11:00:42 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.892 11:00:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:21.892 11:00:42 -- bdev/nbd_common.sh@51 -- # local i 00:06:21.892 11:00:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.892 11:00:42 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:21.892 11:00:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:21.892 11:00:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:21.892 11:00:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:21.892 11:00:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.892 11:00:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.892 11:00:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:21.892 11:00:42 -- bdev/nbd_common.sh@41 -- # break 00:06:21.892 11:00:42 -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.892 11:00:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.892 11:00:42 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:22.151 11:00:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:22.151 11:00:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:22.151 11:00:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:22.151 11:00:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.151 11:00:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.151 11:00:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:22.151 11:00:42 -- bdev/nbd_common.sh@41 -- # break 00:06:22.151 11:00:42 -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.151 11:00:42 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.151 11:00:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.151 11:00:42 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.410 11:00:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:22.410 11:00:42 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:22.410 11:00:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.410 11:00:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:22.410 11:00:42 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:22.410 11:00:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.410 11:00:42 -- bdev/nbd_common.sh@65 -- # true 00:06:22.410 11:00:42 -- bdev/nbd_common.sh@65 -- # count=0 00:06:22.410 11:00:42 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:22.410 11:00:42 -- bdev/nbd_common.sh@104 -- # count=0 00:06:22.410 
11:00:42 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:22.410 11:00:42 -- bdev/nbd_common.sh@109 -- # return 0 00:06:22.410 11:00:42 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:22.669 11:00:43 -- event/event.sh@35 -- # sleep 3 00:06:22.669 [2024-12-13 11:00:43.224858] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.929 [2024-12-13 11:00:43.284636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.929 [2024-12-13 11:00:43.284639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.929 [2024-12-13 11:00:43.325027] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:22.929 [2024-12-13 11:00:43.325067] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:25.553 11:00:46 -- event/event.sh@23 -- # for i in {0..2} 00:06:25.553 11:00:46 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:25.553 spdk_app_start Round 2 00:06:25.553 11:00:46 -- event/event.sh@25 -- # waitforlisten 1450969 /var/tmp/spdk-nbd.sock 00:06:25.553 11:00:46 -- common/autotest_common.sh@829 -- # '[' -z 1450969 ']' 00:06:25.553 11:00:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:25.553 11:00:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.553 11:00:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:25.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:25.553 11:00:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:25.553 11:00:46 -- common/autotest_common.sh@10 -- # set +x 00:06:25.812 11:00:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.812 11:00:46 -- common/autotest_common.sh@862 -- # return 0 00:06:25.812 11:00:46 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:25.812 Malloc0 00:06:25.812 11:00:46 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.071 Malloc1 00:06:26.071 11:00:46 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.071 11:00:46 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.071 11:00:46 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.071 11:00:46 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:26.071 11:00:46 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.072 11:00:46 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:26.072 11:00:46 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.072 11:00:46 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.072 11:00:46 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.072 11:00:46 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:26.072 11:00:46 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.072 11:00:46 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:26.072 11:00:46 -- bdev/nbd_common.sh@12 -- # local i 00:06:26.072 11:00:46 -- bdev/nbd_common.sh@14 -- 
# (( i = 0 )) 00:06:26.072 11:00:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.072 11:00:46 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:26.330 /dev/nbd0 00:06:26.330 11:00:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:26.330 11:00:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:26.331 11:00:46 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:26.331 11:00:46 -- common/autotest_common.sh@867 -- # local i 00:06:26.331 11:00:46 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:26.331 11:00:46 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:26.331 11:00:46 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:26.331 11:00:46 -- common/autotest_common.sh@871 -- # break 00:06:26.331 11:00:46 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:26.331 11:00:46 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:26.331 11:00:46 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.331 1+0 records in 00:06:26.331 1+0 records out 00:06:26.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236021 s, 17.4 MB/s 00:06:26.331 11:00:46 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:26.331 11:00:46 -- common/autotest_common.sh@884 -- # size=4096 00:06:26.331 11:00:46 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:26.331 11:00:46 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:26.331 11:00:46 -- common/autotest_common.sh@887 -- # return 0 00:06:26.331 11:00:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.331 11:00:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.331 11:00:46 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:26.589 /dev/nbd1 00:06:26.590 11:00:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:26.590 11:00:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:26.590 11:00:46 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:26.590 11:00:46 -- common/autotest_common.sh@867 -- # local i 00:06:26.590 11:00:46 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:26.590 11:00:46 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:26.590 11:00:46 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:26.590 11:00:46 -- common/autotest_common.sh@871 -- # break 00:06:26.590 11:00:46 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:26.590 11:00:46 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:26.590 11:00:46 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.590 1+0 records in 00:06:26.590 1+0 records out 00:06:26.590 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184551 s, 22.2 MB/s 00:06:26.590 11:00:46 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:26.590 11:00:46 -- common/autotest_common.sh@884 -- # size=4096 00:06:26.590 11:00:46 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:26.590 11:00:46 -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:26.590 11:00:46 -- common/autotest_common.sh@887 -- # return 0 00:06:26.590 11:00:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.590 11:00:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.590 11:00:46 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:26.590 11:00:46 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.590 11:00:46 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.590 11:00:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:26.590 { 00:06:26.590 "nbd_device": "/dev/nbd0", 00:06:26.590 "bdev_name": "Malloc0" 00:06:26.590 }, 00:06:26.590 { 00:06:26.590 "nbd_device": "/dev/nbd1", 00:06:26.590 "bdev_name": "Malloc1" 00:06:26.590 } 00:06:26.590 ]' 00:06:26.590 11:00:47 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:26.590 { 00:06:26.590 "nbd_device": "/dev/nbd0", 00:06:26.590 "bdev_name": "Malloc0" 00:06:26.590 }, 00:06:26.590 { 00:06:26.590 "nbd_device": "/dev/nbd1", 00:06:26.590 "bdev_name": "Malloc1" 00:06:26.590 } 00:06:26.590 ]' 00:06:26.590 11:00:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.590 11:00:47 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:26.590 /dev/nbd1' 00:06:26.590 11:00:47 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:26.590 /dev/nbd1' 00:06:26.590 11:00:47 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.590 11:00:47 -- bdev/nbd_common.sh@65 -- # count=2 00:06:26.590 11:00:47 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:26.590 11:00:47 -- bdev/nbd_common.sh@95 -- # count=2 00:06:26.590 11:00:47 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:26.590 11:00:47 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:26.590 11:00:47 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.590 11:00:47 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.590 11:00:47 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:26.590 11:00:47 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:26.590 11:00:47 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:26.590 11:00:47 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:26.854 256+0 records in 00:06:26.854 256+0 records out 00:06:26.854 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106162 s, 98.8 MB/s 00:06:26.854 11:00:47 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.854 11:00:47 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:26.854 256+0 records in 00:06:26.854 256+0 records out 00:06:26.854 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133718 s, 78.4 MB/s 00:06:26.854 11:00:47 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.854 11:00:47 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:26.854 256+0 records in 00:06:26.854 256+0 records out 00:06:26.854 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137662 s, 76.2 MB/s 00:06:26.854 11:00:47 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:26.854 11:00:47 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:26.854 11:00:47 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.854 11:00:47 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:26.854 11:00:47 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:26.854 11:00:47 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:26.854 11:00:47 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:26.854 11:00:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:26.854 11:00:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:26.854 11:00:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:26.854 11:00:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:26.854 11:00:47 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:26.854 11:00:47 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:26.854 11:00:47 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.854 11:00:47 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.854 11:00:47 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:26.854 11:00:47 -- bdev/nbd_common.sh@51 -- # local i 00:06:26.854 11:00:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.854 11:00:47 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:26.854 11:00:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:26.854 11:00:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:26.854 11:00:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:26.854 11:00:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.854 11:00:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.854 11:00:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:26.854 11:00:47 -- bdev/nbd_common.sh@41 -- # break 00:06:26.854 11:00:47 -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.855 11:00:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.855 11:00:47 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:27.116 11:00:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:27.116 11:00:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:27.116 11:00:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:27.116 11:00:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.116 11:00:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.116 11:00:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:27.116 11:00:47 -- bdev/nbd_common.sh@41 -- # break 00:06:27.116 11:00:47 -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.116 11:00:47 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.116 11:00:47 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.116 11:00:47 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.375 11:00:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:27.375 11:00:47 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:27.375 11:00:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:27.375 11:00:47 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:27.375 11:00:47 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:27.375 11:00:47 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.375 11:00:47 -- bdev/nbd_common.sh@65 -- # true 00:06:27.375 11:00:47 -- bdev/nbd_common.sh@65 -- # count=0 00:06:27.375 11:00:47 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:27.375 11:00:47 -- bdev/nbd_common.sh@104 -- # count=0 00:06:27.375 11:00:47 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:27.375 11:00:47 -- bdev/nbd_common.sh@109 -- # return 0 00:06:27.375 11:00:47 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:27.634 11:00:47 -- event/event.sh@35 -- # sleep 3 00:06:27.634 [2024-12-13 11:00:48.170333] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:27.893 [2024-12-13 11:00:48.229924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.893 [2024-12-13 11:00:48.229927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.893 [2024-12-13 11:00:48.270328] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:27.893 [2024-12-13 11:00:48.270367] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:30.428 11:00:50 -- event/event.sh@38 -- # waitforlisten 1450969 /var/tmp/spdk-nbd.sock 00:06:30.428 11:00:50 -- common/autotest_common.sh@829 -- # '[' -z 1450969 ']' 00:06:30.428 11:00:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:30.428 11:00:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.428 11:00:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:30.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:30.428 11:00:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.428 11:00:50 -- common/autotest_common.sh@10 -- # set +x 00:06:30.687 11:00:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.687 11:00:51 -- common/autotest_common.sh@862 -- # return 0 00:06:30.687 11:00:51 -- event/event.sh@39 -- # killprocess 1450969 00:06:30.687 11:00:51 -- common/autotest_common.sh@936 -- # '[' -z 1450969 ']' 00:06:30.687 11:00:51 -- common/autotest_common.sh@940 -- # kill -0 1450969 00:06:30.687 11:00:51 -- common/autotest_common.sh@941 -- # uname 00:06:30.687 11:00:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:30.687 11:00:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1450969 00:06:30.687 11:00:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:30.687 11:00:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:30.687 11:00:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1450969' 00:06:30.687 killing process with pid 1450969 00:06:30.687 11:00:51 -- common/autotest_common.sh@955 -- # kill 1450969 00:06:30.687 11:00:51 -- common/autotest_common.sh@960 -- # wait 1450969 00:06:30.946 spdk_app_start is called in Round 0. 00:06:30.946 Shutdown signal received, stop current app iteration 00:06:30.946 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:30.946 spdk_app_start is called in Round 1. 
00:06:30.946 Shutdown signal received, stop current app iteration 00:06:30.946 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:30.946 spdk_app_start is called in Round 2. 00:06:30.946 Shutdown signal received, stop current app iteration 00:06:30.946 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:30.946 spdk_app_start is called in Round 3. 00:06:30.946 Shutdown signal received, stop current app iteration 00:06:30.946 11:00:51 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:30.946 11:00:51 -- event/event.sh@42 -- # return 0 00:06:30.946 00:06:30.946 real 0m15.907s 00:06:30.946 user 0m34.232s 00:06:30.946 sys 0m2.275s 00:06:30.946 11:00:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:30.946 11:00:51 -- common/autotest_common.sh@10 -- # set +x 00:06:30.946 ************************************ 00:06:30.946 END TEST app_repeat 00:06:30.946 ************************************ 00:06:30.946 11:00:51 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:30.946 11:00:51 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:30.946 11:00:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:30.946 11:00:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.946 11:00:51 -- common/autotest_common.sh@10 -- # set +x 00:06:30.946 ************************************ 00:06:30.946 START TEST cpu_locks 00:06:30.946 ************************************ 00:06:30.946 11:00:51 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:30.946 * Looking for test storage... 00:06:30.946 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:30.946 11:00:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:30.946 11:00:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:30.946 11:00:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:31.205 11:00:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:31.205 11:00:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:31.205 11:00:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:31.205 11:00:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:31.205 11:00:51 -- scripts/common.sh@335 -- # IFS=.-: 00:06:31.205 11:00:51 -- scripts/common.sh@335 -- # read -ra ver1 00:06:31.205 11:00:51 -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.205 11:00:51 -- scripts/common.sh@336 -- # read -ra ver2 00:06:31.205 11:00:51 -- scripts/common.sh@337 -- # local 'op=<' 00:06:31.205 11:00:51 -- scripts/common.sh@339 -- # ver1_l=2 00:06:31.205 11:00:51 -- scripts/common.sh@340 -- # ver2_l=1 00:06:31.206 11:00:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:31.206 11:00:51 -- scripts/common.sh@343 -- # case "$op" in 00:06:31.206 11:00:51 -- scripts/common.sh@344 -- # : 1 00:06:31.206 11:00:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:31.206 11:00:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:31.206 11:00:51 -- scripts/common.sh@364 -- # decimal 1 00:06:31.206 11:00:51 -- scripts/common.sh@352 -- # local d=1 00:06:31.206 11:00:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.206 11:00:51 -- scripts/common.sh@354 -- # echo 1 00:06:31.206 11:00:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:31.206 11:00:51 -- scripts/common.sh@365 -- # decimal 2 00:06:31.206 11:00:51 -- scripts/common.sh@352 -- # local d=2 00:06:31.206 11:00:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.206 11:00:51 -- scripts/common.sh@354 -- # echo 2 00:06:31.206 11:00:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:31.206 11:00:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:31.206 11:00:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:31.206 11:00:51 -- scripts/common.sh@367 -- # return 0 00:06:31.206 11:00:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.206 11:00:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:31.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.206 --rc genhtml_branch_coverage=1 00:06:31.206 --rc genhtml_function_coverage=1 00:06:31.206 --rc genhtml_legend=1 00:06:31.206 --rc geninfo_all_blocks=1 00:06:31.206 --rc geninfo_unexecuted_blocks=1 00:06:31.206 00:06:31.206 ' 00:06:31.206 11:00:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:31.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.206 --rc genhtml_branch_coverage=1 00:06:31.206 --rc genhtml_function_coverage=1 00:06:31.206 --rc genhtml_legend=1 00:06:31.206 --rc geninfo_all_blocks=1 00:06:31.206 --rc geninfo_unexecuted_blocks=1 00:06:31.206 00:06:31.206 ' 00:06:31.206 11:00:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:31.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.206 --rc genhtml_branch_coverage=1 00:06:31.206 --rc genhtml_function_coverage=1 00:06:31.206 --rc genhtml_legend=1 00:06:31.206 --rc geninfo_all_blocks=1 00:06:31.206 --rc geninfo_unexecuted_blocks=1 00:06:31.206 00:06:31.206 ' 00:06:31.206 11:00:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:31.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.206 --rc genhtml_branch_coverage=1 00:06:31.206 --rc genhtml_function_coverage=1 00:06:31.206 --rc genhtml_legend=1 00:06:31.206 --rc geninfo_all_blocks=1 00:06:31.206 --rc geninfo_unexecuted_blocks=1 00:06:31.206 00:06:31.206 ' 00:06:31.206 11:00:51 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:31.206 11:00:51 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:31.206 11:00:51 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:31.206 11:00:51 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:31.206 11:00:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:31.206 11:00:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.206 11:00:51 -- common/autotest_common.sh@10 -- # set +x 00:06:31.206 ************************************ 00:06:31.206 START TEST default_locks 00:06:31.206 ************************************ 00:06:31.206 11:00:51 -- common/autotest_common.sh@1114 -- # default_locks 00:06:31.206 11:00:51 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1454140 00:06:31.206 11:00:51 -- event/cpu_locks.sh@47 -- # waitforlisten 1454140 00:06:31.206 11:00:51 -- common/autotest_common.sh@829 -- # '[' -z 1454140 ']' 00:06:31.206 
11:00:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.206 11:00:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.206 11:00:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.206 11:00:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.206 11:00:51 -- common/autotest_common.sh@10 -- # set +x 00:06:31.206 11:00:51 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:31.206 [2024-12-13 11:00:51.622000] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:31.206 [2024-12-13 11:00:51.622051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1454140 ] 00:06:31.206 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.206 [2024-12-13 11:00:51.672101] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.206 [2024-12-13 11:00:51.743724] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:31.206 [2024-12-13 11:00:51.743834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.143 11:00:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.143 11:00:52 -- common/autotest_common.sh@862 -- # return 0 00:06:32.143 11:00:52 -- event/cpu_locks.sh@49 -- # locks_exist 1454140 00:06:32.143 11:00:52 -- event/cpu_locks.sh@22 -- # lslocks -p 1454140 00:06:32.143 11:00:52 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:32.403 lslocks: write error 00:06:32.403 11:00:52 -- event/cpu_locks.sh@50 -- # killprocess 1454140 00:06:32.403 11:00:52 -- common/autotest_common.sh@936 -- # '[' -z 1454140 ']' 00:06:32.403 11:00:52 -- common/autotest_common.sh@940 -- # kill -0 1454140 00:06:32.403 11:00:52 -- common/autotest_common.sh@941 -- # uname 00:06:32.403 11:00:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:32.403 11:00:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1454140 00:06:32.403 11:00:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:32.403 11:00:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:32.403 11:00:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1454140' 00:06:32.403 killing process with pid 1454140 00:06:32.403 11:00:52 -- common/autotest_common.sh@955 -- # kill 1454140 00:06:32.403 11:00:52 -- common/autotest_common.sh@960 -- # wait 1454140 00:06:32.663 11:00:53 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1454140 00:06:32.663 11:00:53 -- common/autotest_common.sh@650 -- # local es=0 00:06:32.663 11:00:53 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1454140 00:06:32.663 11:00:53 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:32.663 11:00:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.663 11:00:53 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:32.663 11:00:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.663 11:00:53 -- common/autotest_common.sh@653 -- # waitforlisten 1454140 00:06:32.663 11:00:53 -- common/autotest_common.sh@829 -- # '[' -z 
1454140 ']' 00:06:32.663 11:00:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.663 11:00:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.663 11:00:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.663 11:00:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.663 11:00:53 -- common/autotest_common.sh@10 -- # set +x 00:06:32.663 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1454140) - No such process 00:06:32.663 ERROR: process (pid: 1454140) is no longer running 00:06:32.663 11:00:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.663 11:00:53 -- common/autotest_common.sh@862 -- # return 1 00:06:32.663 11:00:53 -- common/autotest_common.sh@653 -- # es=1 00:06:32.663 11:00:53 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:32.663 11:00:53 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:32.663 11:00:53 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:32.663 11:00:53 -- event/cpu_locks.sh@54 -- # no_locks 00:06:32.663 11:00:53 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:32.663 11:00:53 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:32.663 11:00:53 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:32.663 00:06:32.663 real 0m1.562s 00:06:32.663 user 0m1.636s 00:06:32.663 sys 0m0.498s 00:06:32.663 11:00:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:32.663 11:00:53 -- common/autotest_common.sh@10 -- # set +x 00:06:32.663 ************************************ 00:06:32.663 END TEST default_locks 00:06:32.663 ************************************ 00:06:32.663 11:00:53 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:32.663 11:00:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:32.663 11:00:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.663 11:00:53 -- common/autotest_common.sh@10 -- # set +x 00:06:32.663 ************************************ 00:06:32.663 START TEST default_locks_via_rpc 00:06:32.663 ************************************ 00:06:32.663 11:00:53 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:06:32.663 11:00:53 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1454439 00:06:32.663 11:00:53 -- event/cpu_locks.sh@63 -- # waitforlisten 1454439 00:06:32.663 11:00:53 -- common/autotest_common.sh@829 -- # '[' -z 1454439 ']' 00:06:32.663 11:00:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.663 11:00:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.663 11:00:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.663 11:00:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.663 11:00:53 -- common/autotest_common.sh@10 -- # set +x 00:06:32.663 11:00:53 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:32.663 [2024-12-13 11:00:53.214412] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
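Note on the default_locks test that finished above: it relies on spdk_tgt taking an exclusive lock on a per-core file under /var/tmp before its reactor starts, which is what the lslocks check is probing. A rough way to reproduce that check by hand, assuming the same build tree as this run (the sleep is illustrative; the test itself uses waitforlisten instead):

  ./build/bin/spdk_tgt -m 0x1 &              # pin the target to core 0
  pid=$!
  sleep 2                                    # crude stand-in for waitforlisten
  lslocks -p "$pid" | grep spdk_cpu_lock     # expect a line for /var/tmp/spdk_cpu_lock_000
  kill "$pid"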
00:06:32.663 [2024-12-13 11:00:53.214463] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1454439 ] 00:06:32.922 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.922 [2024-12-13 11:00:53.264300] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.922 [2024-12-13 11:00:53.335581] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:32.922 [2024-12-13 11:00:53.335685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.491 11:00:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.491 11:00:53 -- common/autotest_common.sh@862 -- # return 0 00:06:33.491 11:00:53 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:33.491 11:00:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.491 11:00:53 -- common/autotest_common.sh@10 -- # set +x 00:06:33.491 11:00:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.491 11:00:53 -- event/cpu_locks.sh@67 -- # no_locks 00:06:33.491 11:00:53 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:33.491 11:00:53 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:33.491 11:00:53 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:33.491 11:00:53 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:33.491 11:00:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.491 11:00:53 -- common/autotest_common.sh@10 -- # set +x 00:06:33.491 11:00:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.491 11:00:54 -- event/cpu_locks.sh@71 -- # locks_exist 1454439 00:06:33.491 11:00:54 -- event/cpu_locks.sh@22 -- # lslocks -p 1454439 00:06:33.491 11:00:54 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:33.750 11:00:54 -- event/cpu_locks.sh@73 -- # killprocess 1454439 00:06:33.750 11:00:54 -- common/autotest_common.sh@936 -- # '[' -z 1454439 ']' 00:06:33.750 11:00:54 -- common/autotest_common.sh@940 -- # kill -0 1454439 00:06:33.750 11:00:54 -- common/autotest_common.sh@941 -- # uname 00:06:33.750 11:00:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:33.750 11:00:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1454439 00:06:33.750 11:00:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:33.750 11:00:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:33.750 11:00:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1454439' 00:06:33.750 killing process with pid 1454439 00:06:33.750 11:00:54 -- common/autotest_common.sh@955 -- # kill 1454439 00:06:33.750 11:00:54 -- common/autotest_common.sh@960 -- # wait 1454439 00:06:34.319 00:06:34.319 real 0m1.429s 00:06:34.319 user 0m1.496s 00:06:34.319 sys 0m0.442s 00:06:34.319 11:00:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:34.319 11:00:54 -- common/autotest_common.sh@10 -- # set +x 00:06:34.319 ************************************ 00:06:34.319 END TEST default_locks_via_rpc 00:06:34.319 ************************************ 00:06:34.319 11:00:54 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:34.319 11:00:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:34.319 11:00:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.319 11:00:54 -- 
common/autotest_common.sh@10 -- # set +x 00:06:34.319 ************************************ 00:06:34.319 START TEST non_locking_app_on_locked_coremask 00:06:34.319 ************************************ 00:06:34.319 11:00:54 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:06:34.319 11:00:54 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1454737 00:06:34.319 11:00:54 -- event/cpu_locks.sh@81 -- # waitforlisten 1454737 /var/tmp/spdk.sock 00:06:34.319 11:00:54 -- common/autotest_common.sh@829 -- # '[' -z 1454737 ']' 00:06:34.319 11:00:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.319 11:00:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.319 11:00:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.319 11:00:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.319 11:00:54 -- common/autotest_common.sh@10 -- # set +x 00:06:34.319 11:00:54 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:34.319 [2024-12-13 11:00:54.680072] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:34.319 [2024-12-13 11:00:54.680119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1454737 ] 00:06:34.319 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.319 [2024-12-13 11:00:54.729024] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.319 [2024-12-13 11:00:54.800436] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:34.319 [2024-12-13 11:00:54.800541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.887 11:00:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.887 11:00:55 -- common/autotest_common.sh@862 -- # return 0 00:06:34.887 11:00:55 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1454998 00:06:34.887 11:00:55 -- event/cpu_locks.sh@85 -- # waitforlisten 1454998 /var/tmp/spdk2.sock 00:06:34.887 11:00:55 -- common/autotest_common.sh@829 -- # '[' -z 1454998 ']' 00:06:34.887 11:00:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.887 11:00:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.887 11:00:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:34.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:34.887 11:00:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.887 11:00:55 -- common/autotest_common.sh@10 -- # set +x 00:06:34.887 11:00:55 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:35.146 [2024-12-13 11:00:55.500756] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
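For the default_locks_via_rpc case above, the same per-core lock files are dropped and re-taken at runtime through RPCs rather than by restarting the target. A minimal manual sequence, assuming the rpc.py shipped in this workspace and the default /var/tmp/spdk.sock socket:

  ./scripts/rpc.py framework_disable_cpumask_locks   # release /var/tmp/spdk_cpu_lock_* while the target keeps running
  ./scripts/rpc.py framework_enable_cpumask_locks    # re-claim them; fails if another process grabbed a core meanwhile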
00:06:35.146 [2024-12-13 11:00:55.500802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1454998 ] 00:06:35.146 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.146 [2024-12-13 11:00:55.569475] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:35.146 [2024-12-13 11:00:55.569496] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.146 [2024-12-13 11:00:55.698189] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:35.146 [2024-12-13 11:00:55.698307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.084 11:00:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.084 11:00:56 -- common/autotest_common.sh@862 -- # return 0 00:06:36.084 11:00:56 -- event/cpu_locks.sh@87 -- # locks_exist 1454737 00:06:36.084 11:00:56 -- event/cpu_locks.sh@22 -- # lslocks -p 1454737 00:06:36.084 11:00:56 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.084 lslocks: write error 00:06:36.084 11:00:56 -- event/cpu_locks.sh@89 -- # killprocess 1454737 00:06:36.084 11:00:56 -- common/autotest_common.sh@936 -- # '[' -z 1454737 ']' 00:06:36.084 11:00:56 -- common/autotest_common.sh@940 -- # kill -0 1454737 00:06:36.084 11:00:56 -- common/autotest_common.sh@941 -- # uname 00:06:36.084 11:00:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:36.084 11:00:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1454737 00:06:36.343 11:00:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:36.343 11:00:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:36.343 11:00:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1454737' 00:06:36.343 killing process with pid 1454737 00:06:36.343 11:00:56 -- common/autotest_common.sh@955 -- # kill 1454737 00:06:36.343 11:00:56 -- common/autotest_common.sh@960 -- # wait 1454737 00:06:36.911 11:00:57 -- event/cpu_locks.sh@90 -- # killprocess 1454998 00:06:36.911 11:00:57 -- common/autotest_common.sh@936 -- # '[' -z 1454998 ']' 00:06:36.911 11:00:57 -- common/autotest_common.sh@940 -- # kill -0 1454998 00:06:36.911 11:00:57 -- common/autotest_common.sh@941 -- # uname 00:06:36.911 11:00:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:36.911 11:00:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1454998 00:06:36.911 11:00:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:36.911 11:00:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:36.911 11:00:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1454998' 00:06:36.911 killing process with pid 1454998 00:06:36.911 11:00:57 -- common/autotest_common.sh@955 -- # kill 1454998 00:06:36.911 11:00:57 -- common/autotest_common.sh@960 -- # wait 1454998 00:06:37.171 00:06:37.171 real 0m3.047s 00:06:37.171 user 0m3.292s 00:06:37.171 sys 0m0.808s 00:06:37.171 11:00:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:37.171 11:00:57 -- common/autotest_common.sh@10 -- # set +x 00:06:37.171 ************************************ 00:06:37.171 END TEST non_locking_app_on_locked_coremask 00:06:37.171 ************************************ 00:06:37.171 11:00:57 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask 
locking_app_on_unlocked_coremask 00:06:37.171 11:00:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:37.171 11:00:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.171 11:00:57 -- common/autotest_common.sh@10 -- # set +x 00:06:37.171 ************************************ 00:06:37.171 START TEST locking_app_on_unlocked_coremask 00:06:37.171 ************************************ 00:06:37.171 11:00:57 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:06:37.171 11:00:57 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1455309 00:06:37.171 11:00:57 -- event/cpu_locks.sh@99 -- # waitforlisten 1455309 /var/tmp/spdk.sock 00:06:37.171 11:00:57 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:37.171 11:00:57 -- common/autotest_common.sh@829 -- # '[' -z 1455309 ']' 00:06:37.171 11:00:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.171 11:00:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.171 11:00:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.171 11:00:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.171 11:00:57 -- common/autotest_common.sh@10 -- # set +x 00:06:37.430 [2024-12-13 11:00:57.769801] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:37.430 [2024-12-13 11:00:57.769846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1455309 ] 00:06:37.430 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.430 [2024-12-13 11:00:57.821073] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:37.430 [2024-12-13 11:00:57.821104] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.430 [2024-12-13 11:00:57.881014] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:37.430 [2024-12-13 11:00:57.881129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.999 11:00:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.999 11:00:58 -- common/autotest_common.sh@862 -- # return 0 00:06:37.999 11:00:58 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1455575 00:06:37.999 11:00:58 -- event/cpu_locks.sh@103 -- # waitforlisten 1455575 /var/tmp/spdk2.sock 00:06:37.999 11:00:58 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:37.999 11:00:58 -- common/autotest_common.sh@829 -- # '[' -z 1455575 ']' 00:06:37.999 11:00:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:37.999 11:00:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.999 11:00:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:37.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
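The locking_app_on_unlocked_coremask run in progress above starts its first target with --disable-cpumask-locks, so core 0 stays unclaimed and the second target, started without the flag, can take the lock itself. The two launches from this log, reduced to a sketch (backgrounding and ordering illustrative):

  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &      # takes no lock on core 0
  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &       # same core, default locking, still starts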
00:06:37.999 11:00:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.999 11:00:58 -- common/autotest_common.sh@10 -- # set +x 00:06:38.258 [2024-12-13 11:00:58.587171] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:38.258 [2024-12-13 11:00:58.587214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1455575 ] 00:06:38.258 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.258 [2024-12-13 11:00:58.654990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.258 [2024-12-13 11:00:58.788547] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:38.258 [2024-12-13 11:00:58.788661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.827 11:00:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.827 11:00:59 -- common/autotest_common.sh@862 -- # return 0 00:06:38.827 11:00:59 -- event/cpu_locks.sh@105 -- # locks_exist 1455575 00:06:38.827 11:00:59 -- event/cpu_locks.sh@22 -- # lslocks -p 1455575 00:06:38.827 11:00:59 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:39.764 lslocks: write error 00:06:39.764 11:00:59 -- event/cpu_locks.sh@107 -- # killprocess 1455309 00:06:39.764 11:00:59 -- common/autotest_common.sh@936 -- # '[' -z 1455309 ']' 00:06:39.764 11:00:59 -- common/autotest_common.sh@940 -- # kill -0 1455309 00:06:39.764 11:00:59 -- common/autotest_common.sh@941 -- # uname 00:06:39.764 11:00:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:39.764 11:00:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1455309 00:06:39.764 11:01:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:39.764 11:01:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:39.764 11:01:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1455309' 00:06:39.764 killing process with pid 1455309 00:06:39.764 11:01:00 -- common/autotest_common.sh@955 -- # kill 1455309 00:06:39.764 11:01:00 -- common/autotest_common.sh@960 -- # wait 1455309 00:06:40.333 11:01:00 -- event/cpu_locks.sh@108 -- # killprocess 1455575 00:06:40.333 11:01:00 -- common/autotest_common.sh@936 -- # '[' -z 1455575 ']' 00:06:40.333 11:01:00 -- common/autotest_common.sh@940 -- # kill -0 1455575 00:06:40.333 11:01:00 -- common/autotest_common.sh@941 -- # uname 00:06:40.333 11:01:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:40.333 11:01:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1455575 00:06:40.333 11:01:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:40.333 11:01:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:40.333 11:01:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1455575' 00:06:40.333 killing process with pid 1455575 00:06:40.333 11:01:00 -- common/autotest_common.sh@955 -- # kill 1455575 00:06:40.333 11:01:00 -- common/autotest_common.sh@960 -- # wait 1455575 00:06:40.593 00:06:40.593 real 0m3.338s 00:06:40.593 user 0m3.551s 00:06:40.593 sys 0m0.935s 00:06:40.593 11:01:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:40.593 11:01:01 -- common/autotest_common.sh@10 -- # set +x 00:06:40.593 ************************************ 00:06:40.593 END TEST locking_app_on_unlocked_coremask 
00:06:40.593 ************************************ 00:06:40.593 11:01:01 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:40.593 11:01:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:40.593 11:01:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.593 11:01:01 -- common/autotest_common.sh@10 -- # set +x 00:06:40.593 ************************************ 00:06:40.593 START TEST locking_app_on_locked_coremask 00:06:40.593 ************************************ 00:06:40.593 11:01:01 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:06:40.593 11:01:01 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1456115 00:06:40.593 11:01:01 -- event/cpu_locks.sh@116 -- # waitforlisten 1456115 /var/tmp/spdk.sock 00:06:40.593 11:01:01 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:40.593 11:01:01 -- common/autotest_common.sh@829 -- # '[' -z 1456115 ']' 00:06:40.593 11:01:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.593 11:01:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.593 11:01:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.593 11:01:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.593 11:01:01 -- common/autotest_common.sh@10 -- # set +x 00:06:40.593 [2024-12-13 11:01:01.148987] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:40.593 [2024-12-13 11:01:01.149037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1456115 ] 00:06:40.852 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.852 [2024-12-13 11:01:01.199755] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.852 [2024-12-13 11:01:01.261977] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:40.852 [2024-12-13 11:01:01.262093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.421 11:01:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.421 11:01:01 -- common/autotest_common.sh@862 -- # return 0 00:06:41.421 11:01:01 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:41.421 11:01:01 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1456151 00:06:41.421 11:01:01 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1456151 /var/tmp/spdk2.sock 00:06:41.421 11:01:01 -- common/autotest_common.sh@650 -- # local es=0 00:06:41.421 11:01:01 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1456151 /var/tmp/spdk2.sock 00:06:41.421 11:01:01 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:41.421 11:01:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.421 11:01:01 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:41.421 11:01:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.421 11:01:01 -- common/autotest_common.sh@653 -- # waitforlisten 1456151 /var/tmp/spdk2.sock 00:06:41.421 11:01:01 -- common/autotest_common.sh@829 -- # '[' 
-z 1456151 ']' 00:06:41.421 11:01:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.421 11:01:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.421 11:01:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.421 11:01:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.421 11:01:01 -- common/autotest_common.sh@10 -- # set +x 00:06:41.421 [2024-12-13 11:01:01.956677] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:41.421 [2024-12-13 11:01:01.956719] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1456151 ] 00:06:41.421 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.682 [2024-12-13 11:01:02.028154] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1456115 has claimed it. 00:06:41.682 [2024-12-13 11:01:02.028190] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:42.251 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1456151) - No such process 00:06:42.251 ERROR: process (pid: 1456151) is no longer running 00:06:42.251 11:01:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.251 11:01:02 -- common/autotest_common.sh@862 -- # return 1 00:06:42.251 11:01:02 -- common/autotest_common.sh@653 -- # es=1 00:06:42.251 11:01:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:42.251 11:01:02 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:42.251 11:01:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:42.251 11:01:02 -- event/cpu_locks.sh@122 -- # locks_exist 1456115 00:06:42.251 11:01:02 -- event/cpu_locks.sh@22 -- # lslocks -p 1456115 00:06:42.251 11:01:02 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:42.510 lslocks: write error 00:06:42.510 11:01:03 -- event/cpu_locks.sh@124 -- # killprocess 1456115 00:06:42.510 11:01:03 -- common/autotest_common.sh@936 -- # '[' -z 1456115 ']' 00:06:42.510 11:01:03 -- common/autotest_common.sh@940 -- # kill -0 1456115 00:06:42.510 11:01:03 -- common/autotest_common.sh@941 -- # uname 00:06:42.510 11:01:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:42.510 11:01:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1456115 00:06:42.510 11:01:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:42.510 11:01:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:42.510 11:01:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1456115' 00:06:42.510 killing process with pid 1456115 00:06:42.510 11:01:03 -- common/autotest_common.sh@955 -- # kill 1456115 00:06:42.510 11:01:03 -- common/autotest_common.sh@960 -- # wait 1456115 00:06:43.080 00:06:43.080 real 0m2.275s 00:06:43.080 user 0m2.501s 00:06:43.080 sys 0m0.615s 00:06:43.080 11:01:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:43.080 11:01:03 -- common/autotest_common.sh@10 -- # set +x 00:06:43.080 ************************************ 00:06:43.080 END TEST locking_app_on_locked_coremask 00:06:43.080 ************************************ 00:06:43.080 11:01:03 -- 
event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:43.080 11:01:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:43.080 11:01:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.080 11:01:03 -- common/autotest_common.sh@10 -- # set +x 00:06:43.080 ************************************ 00:06:43.080 START TEST locking_overlapped_coremask 00:06:43.080 ************************************ 00:06:43.080 11:01:03 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:06:43.080 11:01:03 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1456447 00:06:43.080 11:01:03 -- event/cpu_locks.sh@133 -- # waitforlisten 1456447 /var/tmp/spdk.sock 00:06:43.080 11:01:03 -- common/autotest_common.sh@829 -- # '[' -z 1456447 ']' 00:06:43.080 11:01:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.080 11:01:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.080 11:01:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.080 11:01:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.080 11:01:03 -- common/autotest_common.sh@10 -- # set +x 00:06:43.080 11:01:03 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:43.080 [2024-12-13 11:01:03.457868] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:43.080 [2024-12-13 11:01:03.457919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1456447 ] 00:06:43.080 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.080 [2024-12-13 11:01:03.508320] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.080 [2024-12-13 11:01:03.581021] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:43.080 [2024-12-13 11:01:03.581146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.080 [2024-12-13 11:01:03.581161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.080 [2024-12-13 11:01:03.581163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.018 11:01:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.018 11:01:04 -- common/autotest_common.sh@862 -- # return 0 00:06:44.018 11:01:04 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1456707 00:06:44.018 11:01:04 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1456707 /var/tmp/spdk2.sock 00:06:44.018 11:01:04 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:44.018 11:01:04 -- common/autotest_common.sh@650 -- # local es=0 00:06:44.018 11:01:04 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1456707 /var/tmp/spdk2.sock 00:06:44.018 11:01:04 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:44.018 11:01:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.018 11:01:04 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:44.018 11:01:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.018 11:01:04 -- 
common/autotest_common.sh@653 -- # waitforlisten 1456707 /var/tmp/spdk2.sock 00:06:44.018 11:01:04 -- common/autotest_common.sh@829 -- # '[' -z 1456707 ']' 00:06:44.018 11:01:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.018 11:01:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.018 11:01:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:44.018 11:01:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.018 11:01:04 -- common/autotest_common.sh@10 -- # set +x 00:06:44.018 [2024-12-13 11:01:04.290093] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:44.018 [2024-12-13 11:01:04.290136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1456707 ] 00:06:44.018 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.018 [2024-12-13 11:01:04.364209] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1456447 has claimed it. 00:06:44.018 [2024-12-13 11:01:04.364244] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:44.587 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1456707) - No such process 00:06:44.587 ERROR: process (pid: 1456707) is no longer running 00:06:44.587 11:01:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.587 11:01:04 -- common/autotest_common.sh@862 -- # return 1 00:06:44.587 11:01:04 -- common/autotest_common.sh@653 -- # es=1 00:06:44.587 11:01:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:44.587 11:01:04 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:44.587 11:01:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:44.587 11:01:04 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:44.587 11:01:04 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:44.588 11:01:04 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:44.588 11:01:04 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:44.588 11:01:04 -- event/cpu_locks.sh@141 -- # killprocess 1456447 00:06:44.588 11:01:04 -- common/autotest_common.sh@936 -- # '[' -z 1456447 ']' 00:06:44.588 11:01:04 -- common/autotest_common.sh@940 -- # kill -0 1456447 00:06:44.588 11:01:04 -- common/autotest_common.sh@941 -- # uname 00:06:44.589 11:01:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:44.589 11:01:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1456447 00:06:44.589 11:01:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:44.589 11:01:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:44.589 11:01:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1456447' 00:06:44.589 killing process with pid 1456447 00:06:44.589 11:01:04 -- common/autotest_common.sh@955 -- # kill 1456447 00:06:44.589 11:01:04 -- 
common/autotest_common.sh@960 -- # wait 1456447 00:06:44.850 00:06:44.850 real 0m1.880s 00:06:44.850 user 0m5.293s 00:06:44.850 sys 0m0.395s 00:06:44.850 11:01:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:44.850 11:01:05 -- common/autotest_common.sh@10 -- # set +x 00:06:44.850 ************************************ 00:06:44.850 END TEST locking_overlapped_coremask 00:06:44.850 ************************************ 00:06:44.850 11:01:05 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:44.850 11:01:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:44.850 11:01:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.850 11:01:05 -- common/autotest_common.sh@10 -- # set +x 00:06:44.850 ************************************ 00:06:44.850 START TEST locking_overlapped_coremask_via_rpc 00:06:44.850 ************************************ 00:06:44.850 11:01:05 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:06:44.850 11:01:05 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1456909 00:06:44.850 11:01:05 -- event/cpu_locks.sh@149 -- # waitforlisten 1456909 /var/tmp/spdk.sock 00:06:44.850 11:01:05 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:44.850 11:01:05 -- common/autotest_common.sh@829 -- # '[' -z 1456909 ']' 00:06:44.850 11:01:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.850 11:01:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.850 11:01:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.850 11:01:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.850 11:01:05 -- common/autotest_common.sh@10 -- # set +x 00:06:44.850 [2024-12-13 11:01:05.380951] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:44.850 [2024-12-13 11:01:05.381000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1456909 ] 00:06:44.850 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.109 [2024-12-13 11:01:05.431729] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
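The locking_overlapped_coremask test that just ended pits a target on -m 0x7 (cores 0-2) against one on -m 0x1c (cores 2-4); their overlap is 0x7 & 0x1c = 0x4, i.e. core 2, which is exactly the core named in the 'Cannot create lock on core 2' error above. A one-liner to check such an overlap before starting a second instance (mask values taken from this run):

  printf 'contested cores mask: 0x%x\n' $(( 0x7 & 0x1c ))    # prints 0x4 -> core 2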
00:06:45.109 [2024-12-13 11:01:05.431754] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:45.109 [2024-12-13 11:01:05.505160] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:45.109 [2024-12-13 11:01:05.505308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.109 [2024-12-13 11:01:05.505348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.109 [2024-12-13 11:01:05.505350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.677 11:01:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.677 11:01:06 -- common/autotest_common.sh@862 -- # return 0 00:06:45.677 11:01:06 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1457020 00:06:45.677 11:01:06 -- event/cpu_locks.sh@153 -- # waitforlisten 1457020 /var/tmp/spdk2.sock 00:06:45.677 11:01:06 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:45.677 11:01:06 -- common/autotest_common.sh@829 -- # '[' -z 1457020 ']' 00:06:45.677 11:01:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.677 11:01:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.677 11:01:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:45.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:45.677 11:01:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.677 11:01:06 -- common/autotest_common.sh@10 -- # set +x 00:06:45.677 [2024-12-13 11:01:06.216711] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:45.677 [2024-12-13 11:01:06.216755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1457020 ] 00:06:45.677 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.936 [2024-12-13 11:01:06.288200] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:45.936 [2024-12-13 11:01:06.288225] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:45.936 [2024-12-13 11:01:06.418912] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:45.936 [2024-12-13 11:01:06.419097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:45.936 [2024-12-13 11:01:06.426381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.936 [2024-12-13 11:01:06.426382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:46.504 11:01:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.504 11:01:06 -- common/autotest_common.sh@862 -- # return 0 00:06:46.504 11:01:06 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:46.504 11:01:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.504 11:01:06 -- common/autotest_common.sh@10 -- # set +x 00:06:46.504 11:01:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.504 11:01:07 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:46.504 11:01:07 -- common/autotest_common.sh@650 -- # local es=0 00:06:46.504 11:01:07 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:46.504 11:01:07 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:46.504 11:01:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.504 11:01:07 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:46.504 11:01:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.504 11:01:07 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:46.504 11:01:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.504 11:01:07 -- common/autotest_common.sh@10 -- # set +x 00:06:46.504 [2024-12-13 11:01:07.013335] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1456909 has claimed it. 00:06:46.504 request: 00:06:46.504 { 00:06:46.504 "method": "framework_enable_cpumask_locks", 00:06:46.504 "req_id": 1 00:06:46.504 } 00:06:46.504 Got JSON-RPC error response 00:06:46.504 response: 00:06:46.504 { 00:06:46.504 "code": -32603, 00:06:46.504 "message": "Failed to claim CPU core: 2" 00:06:46.504 } 00:06:46.504 11:01:07 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:46.504 11:01:07 -- common/autotest_common.sh@653 -- # es=1 00:06:46.504 11:01:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:46.504 11:01:07 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:46.504 11:01:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:46.504 11:01:07 -- event/cpu_locks.sh@158 -- # waitforlisten 1456909 /var/tmp/spdk.sock 00:06:46.504 11:01:07 -- common/autotest_common.sh@829 -- # '[' -z 1456909 ']' 00:06:46.504 11:01:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.504 11:01:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.504 11:01:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
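The -32603 response above is the second target (running on 0x1c with --disable-cpumask-locks) being asked to re-enable its core locks while the first target, which re-enabled its own locks a moment earlier, already holds core 2. Assuming the workspace rpc.py, the failing call can be reproduced against the second RPC socket for as long as the first target stays alive:

  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # expected: JSON-RPC error -32603, 'Failed to claim CPU core: 2'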
00:06:46.504 11:01:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.504 11:01:07 -- common/autotest_common.sh@10 -- # set +x 00:06:46.763 11:01:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.763 11:01:07 -- common/autotest_common.sh@862 -- # return 0 00:06:46.763 11:01:07 -- event/cpu_locks.sh@159 -- # waitforlisten 1457020 /var/tmp/spdk2.sock 00:06:46.763 11:01:07 -- common/autotest_common.sh@829 -- # '[' -z 1457020 ']' 00:06:46.763 11:01:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.763 11:01:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.763 11:01:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.763 11:01:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.763 11:01:07 -- common/autotest_common.sh@10 -- # set +x 00:06:47.022 11:01:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.022 11:01:07 -- common/autotest_common.sh@862 -- # return 0 00:06:47.022 11:01:07 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:47.022 11:01:07 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:47.022 11:01:07 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:47.022 11:01:07 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:47.022 00:06:47.022 real 0m2.045s 00:06:47.022 user 0m0.839s 00:06:47.022 sys 0m0.143s 00:06:47.022 11:01:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:47.022 11:01:07 -- common/autotest_common.sh@10 -- # set +x 00:06:47.022 ************************************ 00:06:47.022 END TEST locking_overlapped_coremask_via_rpc 00:06:47.022 ************************************ 00:06:47.022 11:01:07 -- event/cpu_locks.sh@174 -- # cleanup 00:06:47.022 11:01:07 -- event/cpu_locks.sh@15 -- # [[ -z 1456909 ]] 00:06:47.022 11:01:07 -- event/cpu_locks.sh@15 -- # killprocess 1456909 00:06:47.022 11:01:07 -- common/autotest_common.sh@936 -- # '[' -z 1456909 ']' 00:06:47.022 11:01:07 -- common/autotest_common.sh@940 -- # kill -0 1456909 00:06:47.022 11:01:07 -- common/autotest_common.sh@941 -- # uname 00:06:47.022 11:01:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:47.022 11:01:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1456909 00:06:47.022 11:01:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:47.022 11:01:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:47.022 11:01:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1456909' 00:06:47.022 killing process with pid 1456909 00:06:47.022 11:01:07 -- common/autotest_common.sh@955 -- # kill 1456909 00:06:47.022 11:01:07 -- common/autotest_common.sh@960 -- # wait 1456909 00:06:47.281 11:01:07 -- event/cpu_locks.sh@16 -- # [[ -z 1457020 ]] 00:06:47.281 11:01:07 -- event/cpu_locks.sh@16 -- # killprocess 1457020 00:06:47.281 11:01:07 -- common/autotest_common.sh@936 -- # '[' -z 1457020 ']' 00:06:47.281 11:01:07 -- common/autotest_common.sh@940 -- # kill -0 1457020 00:06:47.281 11:01:07 -- common/autotest_common.sh@941 -- # uname 
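check_remaining_locks above is the escaped [[ ... == \/\v\a\r\/\t\m\p... ]] comparison as rendered by xtrace; unescaped, it amounts to verifying that the only lock files left are the three claimed by the 0x7 target:

  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ "${locks[*]}" == "${locks_expected[*]}" ]]    # true when exactly cores 0-2 are locked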
00:06:47.281 11:01:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:47.281 11:01:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1457020 00:06:47.540 11:01:07 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:47.540 11:01:07 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:47.540 11:01:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1457020' 00:06:47.540 killing process with pid 1457020 00:06:47.540 11:01:07 -- common/autotest_common.sh@955 -- # kill 1457020 00:06:47.540 11:01:07 -- common/autotest_common.sh@960 -- # wait 1457020 00:06:47.800 11:01:08 -- event/cpu_locks.sh@18 -- # rm -f 00:06:47.800 11:01:08 -- event/cpu_locks.sh@1 -- # cleanup 00:06:47.800 11:01:08 -- event/cpu_locks.sh@15 -- # [[ -z 1456909 ]] 00:06:47.800 11:01:08 -- event/cpu_locks.sh@15 -- # killprocess 1456909 00:06:47.800 11:01:08 -- common/autotest_common.sh@936 -- # '[' -z 1456909 ']' 00:06:47.800 11:01:08 -- common/autotest_common.sh@940 -- # kill -0 1456909 00:06:47.800 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1456909) - No such process 00:06:47.800 11:01:08 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1456909 is not found' 00:06:47.800 Process with pid 1456909 is not found 00:06:47.800 11:01:08 -- event/cpu_locks.sh@16 -- # [[ -z 1457020 ]] 00:06:47.800 11:01:08 -- event/cpu_locks.sh@16 -- # killprocess 1457020 00:06:47.800 11:01:08 -- common/autotest_common.sh@936 -- # '[' -z 1457020 ']' 00:06:47.800 11:01:08 -- common/autotest_common.sh@940 -- # kill -0 1457020 00:06:47.800 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1457020) - No such process 00:06:47.800 11:01:08 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1457020 is not found' 00:06:47.800 Process with pid 1457020 is not found 00:06:47.800 11:01:08 -- event/cpu_locks.sh@18 -- # rm -f 00:06:47.800 00:06:47.800 real 0m16.788s 00:06:47.800 user 0m29.104s 00:06:47.800 sys 0m4.653s 00:06:47.800 11:01:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:47.800 11:01:08 -- common/autotest_common.sh@10 -- # set +x 00:06:47.800 ************************************ 00:06:47.800 END TEST cpu_locks 00:06:47.800 ************************************ 00:06:47.800 00:06:47.800 real 0m41.877s 00:06:47.800 user 1m20.454s 00:06:47.800 sys 0m7.785s 00:06:47.800 11:01:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:47.800 11:01:08 -- common/autotest_common.sh@10 -- # set +x 00:06:47.800 ************************************ 00:06:47.800 END TEST event 00:06:47.800 ************************************ 00:06:47.800 11:01:08 -- spdk/autotest.sh@175 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:47.800 11:01:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:47.800 11:01:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.800 11:01:08 -- common/autotest_common.sh@10 -- # set +x 00:06:47.800 ************************************ 00:06:47.800 START TEST thread 00:06:47.800 ************************************ 00:06:47.800 11:01:08 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:47.800 * Looking for test storage... 
00:06:47.800 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:06:47.800 11:01:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:47.800 11:01:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:47.800 11:01:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:48.060 11:01:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:48.060 11:01:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:48.060 11:01:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:48.060 11:01:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:48.060 11:01:08 -- scripts/common.sh@335 -- # IFS=.-: 00:06:48.060 11:01:08 -- scripts/common.sh@335 -- # read -ra ver1 00:06:48.060 11:01:08 -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.060 11:01:08 -- scripts/common.sh@336 -- # read -ra ver2 00:06:48.060 11:01:08 -- scripts/common.sh@337 -- # local 'op=<' 00:06:48.060 11:01:08 -- scripts/common.sh@339 -- # ver1_l=2 00:06:48.060 11:01:08 -- scripts/common.sh@340 -- # ver2_l=1 00:06:48.060 11:01:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:48.060 11:01:08 -- scripts/common.sh@343 -- # case "$op" in 00:06:48.060 11:01:08 -- scripts/common.sh@344 -- # : 1 00:06:48.060 11:01:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:48.060 11:01:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:48.060 11:01:08 -- scripts/common.sh@364 -- # decimal 1 00:06:48.060 11:01:08 -- scripts/common.sh@352 -- # local d=1 00:06:48.060 11:01:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.060 11:01:08 -- scripts/common.sh@354 -- # echo 1 00:06:48.060 11:01:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:48.060 11:01:08 -- scripts/common.sh@365 -- # decimal 2 00:06:48.060 11:01:08 -- scripts/common.sh@352 -- # local d=2 00:06:48.060 11:01:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.060 11:01:08 -- scripts/common.sh@354 -- # echo 2 00:06:48.060 11:01:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:48.060 11:01:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:48.060 11:01:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:48.060 11:01:08 -- scripts/common.sh@367 -- # return 0 00:06:48.060 11:01:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.060 11:01:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:48.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.060 --rc genhtml_branch_coverage=1 00:06:48.060 --rc genhtml_function_coverage=1 00:06:48.060 --rc genhtml_legend=1 00:06:48.060 --rc geninfo_all_blocks=1 00:06:48.060 --rc geninfo_unexecuted_blocks=1 00:06:48.060 00:06:48.060 ' 00:06:48.060 11:01:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:48.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.061 --rc genhtml_branch_coverage=1 00:06:48.061 --rc genhtml_function_coverage=1 00:06:48.061 --rc genhtml_legend=1 00:06:48.061 --rc geninfo_all_blocks=1 00:06:48.061 --rc geninfo_unexecuted_blocks=1 00:06:48.061 00:06:48.061 ' 00:06:48.061 11:01:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:48.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.061 --rc genhtml_branch_coverage=1 00:06:48.061 --rc genhtml_function_coverage=1 00:06:48.061 --rc genhtml_legend=1 00:06:48.061 --rc geninfo_all_blocks=1 00:06:48.061 --rc geninfo_unexecuted_blocks=1 00:06:48.061 00:06:48.061 ' 
00:06:48.061 11:01:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:48.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.061 --rc genhtml_branch_coverage=1 00:06:48.061 --rc genhtml_function_coverage=1 00:06:48.061 --rc genhtml_legend=1 00:06:48.061 --rc geninfo_all_blocks=1 00:06:48.061 --rc geninfo_unexecuted_blocks=1 00:06:48.061 00:06:48.061 ' 00:06:48.061 11:01:08 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:48.061 11:01:08 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:48.061 11:01:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:48.061 11:01:08 -- common/autotest_common.sh@10 -- # set +x 00:06:48.061 ************************************ 00:06:48.061 START TEST thread_poller_perf 00:06:48.061 ************************************ 00:06:48.061 11:01:08 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:48.061 [2024-12-13 11:01:08.421516] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:48.061 [2024-12-13 11:01:08.421570] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1457643 ] 00:06:48.061 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.061 [2024-12-13 11:01:08.472114] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.061 [2024-12-13 11:01:08.537562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.061 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:49.441 [2024-12-13T10:01:10.010Z] ====================================== 00:06:49.441 [2024-12-13T10:01:10.010Z] busy:2708001052 (cyc) 00:06:49.441 [2024-12-13T10:01:10.010Z] total_run_count: 433000 00:06:49.441 [2024-12-13T10:01:10.010Z] tsc_hz: 2700000000 (cyc) 00:06:49.441 [2024-12-13T10:01:10.010Z] ====================================== 00:06:49.441 [2024-12-13T10:01:10.010Z] poller_cost: 6254 (cyc), 2316 (nsec) 00:06:49.441 00:06:49.441 real 0m1.213s 00:06:49.441 user 0m1.149s 00:06:49.441 sys 0m0.061s 00:06:49.441 11:01:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:49.441 11:01:09 -- common/autotest_common.sh@10 -- # set +x 00:06:49.441 ************************************ 00:06:49.441 END TEST thread_poller_perf 00:06:49.441 ************************************ 00:06:49.441 11:01:09 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:49.441 11:01:09 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:49.441 11:01:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.441 11:01:09 -- common/autotest_common.sh@10 -- # set +x 00:06:49.441 ************************************ 00:06:49.441 START TEST thread_poller_perf 00:06:49.441 ************************************ 00:06:49.441 11:01:09 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:49.441 [2024-12-13 11:01:09.679210] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:49.441 [2024-12-13 11:01:09.679296] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1457836 ] 00:06:49.441 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.441 [2024-12-13 11:01:09.735131] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.441 [2024-12-13 11:01:09.798892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.441 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:50.376 [2024-12-13T10:01:10.945Z] ====================================== 00:06:50.376 [2024-12-13T10:01:10.945Z] busy:2702369332 (cyc) 00:06:50.376 [2024-12-13T10:01:10.945Z] total_run_count: 5784000 00:06:50.376 [2024-12-13T10:01:10.945Z] tsc_hz: 2700000000 (cyc) 00:06:50.376 [2024-12-13T10:01:10.945Z] ====================================== 00:06:50.376 [2024-12-13T10:01:10.945Z] poller_cost: 467 (cyc), 172 (nsec) 00:06:50.376 00:06:50.376 real 0m1.228s 00:06:50.376 user 0m1.157s 00:06:50.376 sys 0m0.066s 00:06:50.376 11:01:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:50.376 11:01:10 -- common/autotest_common.sh@10 -- # set +x 00:06:50.376 ************************************ 00:06:50.376 END TEST thread_poller_perf 00:06:50.376 ************************************ 00:06:50.376 11:01:10 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:50.376 00:06:50.376 real 0m2.644s 00:06:50.376 user 0m2.411s 00:06:50.376 sys 0m0.241s 00:06:50.376 11:01:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:50.376 11:01:10 -- common/autotest_common.sh@10 -- # set +x 00:06:50.376 ************************************ 00:06:50.376 END TEST thread 00:06:50.376 ************************************ 00:06:50.635 11:01:10 -- spdk/autotest.sh@176 -- # run_test accel /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:06:50.635 11:01:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:50.635 11:01:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.635 11:01:10 -- common/autotest_common.sh@10 -- # set +x 00:06:50.635 ************************************ 00:06:50.635 START TEST accel 00:06:50.635 ************************************ 00:06:50.635 11:01:10 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:06:50.635 * Looking for test storage... 
00:06:50.635 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:06:50.635 11:01:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:50.635 11:01:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:50.635 11:01:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:50.635 11:01:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:50.635 11:01:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:50.635 11:01:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:50.635 11:01:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:50.636 11:01:11 -- scripts/common.sh@335 -- # IFS=.-: 00:06:50.636 11:01:11 -- scripts/common.sh@335 -- # read -ra ver1 00:06:50.636 11:01:11 -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.636 11:01:11 -- scripts/common.sh@336 -- # read -ra ver2 00:06:50.636 11:01:11 -- scripts/common.sh@337 -- # local 'op=<' 00:06:50.636 11:01:11 -- scripts/common.sh@339 -- # ver1_l=2 00:06:50.636 11:01:11 -- scripts/common.sh@340 -- # ver2_l=1 00:06:50.636 11:01:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:50.636 11:01:11 -- scripts/common.sh@343 -- # case "$op" in 00:06:50.636 11:01:11 -- scripts/common.sh@344 -- # : 1 00:06:50.636 11:01:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:50.636 11:01:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:50.636 11:01:11 -- scripts/common.sh@364 -- # decimal 1 00:06:50.636 11:01:11 -- scripts/common.sh@352 -- # local d=1 00:06:50.636 11:01:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.636 11:01:11 -- scripts/common.sh@354 -- # echo 1 00:06:50.636 11:01:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:50.636 11:01:11 -- scripts/common.sh@365 -- # decimal 2 00:06:50.636 11:01:11 -- scripts/common.sh@352 -- # local d=2 00:06:50.636 11:01:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.636 11:01:11 -- scripts/common.sh@354 -- # echo 2 00:06:50.636 11:01:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:50.636 11:01:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:50.636 11:01:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:50.636 11:01:11 -- scripts/common.sh@367 -- # return 0 00:06:50.636 11:01:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.636 11:01:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:50.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.636 --rc genhtml_branch_coverage=1 00:06:50.636 --rc genhtml_function_coverage=1 00:06:50.636 --rc genhtml_legend=1 00:06:50.636 --rc geninfo_all_blocks=1 00:06:50.636 --rc geninfo_unexecuted_blocks=1 00:06:50.636 00:06:50.636 ' 00:06:50.636 11:01:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:50.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.636 --rc genhtml_branch_coverage=1 00:06:50.636 --rc genhtml_function_coverage=1 00:06:50.636 --rc genhtml_legend=1 00:06:50.636 --rc geninfo_all_blocks=1 00:06:50.636 --rc geninfo_unexecuted_blocks=1 00:06:50.636 00:06:50.636 ' 00:06:50.636 11:01:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:50.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.636 --rc genhtml_branch_coverage=1 00:06:50.636 --rc genhtml_function_coverage=1 00:06:50.636 --rc genhtml_legend=1 00:06:50.636 --rc geninfo_all_blocks=1 00:06:50.636 --rc geninfo_unexecuted_blocks=1 00:06:50.636 00:06:50.636 ' 
00:06:50.636 11:01:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:50.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.636 --rc genhtml_branch_coverage=1 00:06:50.636 --rc genhtml_function_coverage=1 00:06:50.636 --rc genhtml_legend=1 00:06:50.636 --rc geninfo_all_blocks=1 00:06:50.636 --rc geninfo_unexecuted_blocks=1 00:06:50.636 00:06:50.636 ' 00:06:50.636 11:01:11 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:50.636 11:01:11 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:50.636 11:01:11 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:50.636 11:01:11 -- accel/accel.sh@59 -- # spdk_tgt_pid=1458199 00:06:50.636 11:01:11 -- accel/accel.sh@60 -- # waitforlisten 1458199 00:06:50.636 11:01:11 -- common/autotest_common.sh@829 -- # '[' -z 1458199 ']' 00:06:50.636 11:01:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.636 11:01:11 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:50.636 11:01:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.636 11:01:11 -- accel/accel.sh@58 -- # build_accel_config 00:06:50.636 11:01:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.636 11:01:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.636 11:01:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.636 11:01:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.636 11:01:11 -- common/autotest_common.sh@10 -- # set +x 00:06:50.636 11:01:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.636 11:01:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.636 11:01:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.636 11:01:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.637 11:01:11 -- accel/accel.sh@42 -- # jq -r . 00:06:50.637 [2024-12-13 11:01:11.157272] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:50.637 [2024-12-13 11:01:11.157338] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1458199 ] 00:06:50.637 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.897 [2024-12-13 11:01:11.206699] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.897 [2024-12-13 11:01:11.277500] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:50.897 [2024-12-13 11:01:11.277608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.465 11:01:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.465 11:01:11 -- common/autotest_common.sh@862 -- # return 0 00:06:51.465 11:01:11 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:51.465 11:01:11 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:51.465 11:01:11 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:51.465 11:01:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.465 11:01:11 -- common/autotest_common.sh@10 -- # set +x 00:06:51.465 11:01:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.465 11:01:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.465 11:01:11 -- accel/accel.sh@64 -- # IFS== 00:06:51.465 11:01:11 -- accel/accel.sh@64 -- # read -r opc module 00:06:51.465 11:01:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:51.465 11:01:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.465 11:01:11 -- accel/accel.sh@64 -- # IFS== 00:06:51.465 11:01:11 -- accel/accel.sh@64 -- # read -r opc module 00:06:51.465 11:01:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:51.465 11:01:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.465 11:01:11 -- accel/accel.sh@64 -- # IFS== 00:06:51.465 11:01:11 -- accel/accel.sh@64 -- # read -r opc module 00:06:51.465 11:01:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:51.465 11:01:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.465 11:01:11 -- accel/accel.sh@64 -- # IFS== 00:06:51.465 11:01:11 -- accel/accel.sh@64 -- # read -r opc module 00:06:51.465 11:01:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:51.465 11:01:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.465 11:01:11 -- accel/accel.sh@64 -- # IFS== 00:06:51.465 11:01:11 -- accel/accel.sh@64 -- # read -r opc module 00:06:51.465 11:01:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:51.465 11:01:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.465 11:01:11 -- accel/accel.sh@64 -- # IFS== 00:06:51.465 11:01:11 -- accel/accel.sh@64 -- # read -r opc module 00:06:51.465 11:01:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:51.465 11:01:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.465 11:01:11 -- accel/accel.sh@64 -- # IFS== 00:06:51.465 11:01:11 -- accel/accel.sh@64 -- # read -r opc module 00:06:51.465 11:01:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:51.465 11:01:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.465 11:01:11 -- accel/accel.sh@64 -- # IFS== 00:06:51.465 11:01:11 -- accel/accel.sh@64 -- # read -r opc module 00:06:51.465 11:01:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:51.465 11:01:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.465 11:01:11 -- accel/accel.sh@64 -- # IFS== 00:06:51.465 11:01:11 -- accel/accel.sh@64 -- # read -r opc module 00:06:51.465 11:01:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:51.465 11:01:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.465 11:01:11 -- accel/accel.sh@64 -- # IFS== 00:06:51.465 11:01:11 -- accel/accel.sh@64 -- # read -r opc module 00:06:51.465 11:01:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:51.465 11:01:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.465 11:01:11 -- accel/accel.sh@64 -- # IFS== 00:06:51.465 11:01:11 -- accel/accel.sh@64 -- # read -r opc module 00:06:51.465 11:01:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:51.465 11:01:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.465 11:01:11 -- accel/accel.sh@64 -- # IFS== 00:06:51.465 11:01:11 -- accel/accel.sh@64 -- # read -r opc module 00:06:51.465 
11:01:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:51.465 11:01:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.465 11:01:11 -- accel/accel.sh@64 -- # IFS== 00:06:51.465 11:01:11 -- accel/accel.sh@64 -- # read -r opc module 00:06:51.465 11:01:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:51.465 11:01:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.465 11:01:11 -- accel/accel.sh@64 -- # IFS== 00:06:51.465 11:01:11 -- accel/accel.sh@64 -- # read -r opc module 00:06:51.465 11:01:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:51.465 11:01:11 -- accel/accel.sh@67 -- # killprocess 1458199 00:06:51.465 11:01:11 -- common/autotest_common.sh@936 -- # '[' -z 1458199 ']' 00:06:51.465 11:01:11 -- common/autotest_common.sh@940 -- # kill -0 1458199 00:06:51.465 11:01:11 -- common/autotest_common.sh@941 -- # uname 00:06:51.465 11:01:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:51.465 11:01:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1458199 00:06:51.465 11:01:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:51.465 11:01:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:51.465 11:01:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1458199' 00:06:51.465 killing process with pid 1458199 00:06:51.465 11:01:12 -- common/autotest_common.sh@955 -- # kill 1458199 00:06:51.465 11:01:12 -- common/autotest_common.sh@960 -- # wait 1458199 00:06:52.033 11:01:12 -- accel/accel.sh@68 -- # trap - ERR 00:06:52.033 11:01:12 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:52.033 11:01:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:52.033 11:01:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.033 11:01:12 -- common/autotest_common.sh@10 -- # set +x 00:06:52.033 11:01:12 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:06:52.033 11:01:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:52.033 11:01:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.033 11:01:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.033 11:01:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.033 11:01:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.033 11:01:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.033 11:01:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.033 11:01:12 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.033 11:01:12 -- accel/accel.sh@42 -- # jq -r . 
00:06:52.033 11:01:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:52.033 11:01:12 -- common/autotest_common.sh@10 -- # set +x 00:06:52.033 11:01:12 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:52.033 11:01:12 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:52.033 11:01:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.033 11:01:12 -- common/autotest_common.sh@10 -- # set +x 00:06:52.033 ************************************ 00:06:52.033 START TEST accel_missing_filename 00:06:52.033 ************************************ 00:06:52.033 11:01:12 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:06:52.033 11:01:12 -- common/autotest_common.sh@650 -- # local es=0 00:06:52.033 11:01:12 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:52.033 11:01:12 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:52.033 11:01:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.033 11:01:12 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:52.033 11:01:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.033 11:01:12 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:52.033 11:01:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:52.033 11:01:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.033 11:01:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.033 11:01:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.033 11:01:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.033 11:01:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.033 11:01:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.033 11:01:12 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.033 11:01:12 -- accel/accel.sh@42 -- # jq -r . 00:06:52.033 [2024-12-13 11:01:12.451575] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:52.033 [2024-12-13 11:01:12.451652] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1458426 ] 00:06:52.033 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.033 [2024-12-13 11:01:12.507257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.033 [2024-12-13 11:01:12.571879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.293 [2024-12-13 11:01:12.612152] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.293 [2024-12-13 11:01:12.671958] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:52.293 A filename is required. 
00:06:52.293 11:01:12 -- common/autotest_common.sh@653 -- # es=234 00:06:52.293 11:01:12 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:52.293 11:01:12 -- common/autotest_common.sh@662 -- # es=106 00:06:52.293 11:01:12 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:52.293 11:01:12 -- common/autotest_common.sh@670 -- # es=1 00:06:52.293 11:01:12 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:52.293 00:06:52.293 real 0m0.336s 00:06:52.293 user 0m0.259s 00:06:52.293 sys 0m0.115s 00:06:52.293 11:01:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:52.293 11:01:12 -- common/autotest_common.sh@10 -- # set +x 00:06:52.293 ************************************ 00:06:52.293 END TEST accel_missing_filename 00:06:52.293 ************************************ 00:06:52.293 11:01:12 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:52.293 11:01:12 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:52.293 11:01:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.293 11:01:12 -- common/autotest_common.sh@10 -- # set +x 00:06:52.293 ************************************ 00:06:52.293 START TEST accel_compress_verify 00:06:52.293 ************************************ 00:06:52.293 11:01:12 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:52.293 11:01:12 -- common/autotest_common.sh@650 -- # local es=0 00:06:52.293 11:01:12 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:52.293 11:01:12 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:52.293 11:01:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.293 11:01:12 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:52.293 11:01:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.293 11:01:12 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:52.293 11:01:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:52.293 11:01:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.293 11:01:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.293 11:01:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.293 11:01:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.293 11:01:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.293 11:01:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.293 11:01:12 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.293 11:01:12 -- accel/accel.sh@42 -- # jq -r . 00:06:52.293 [2024-12-13 11:01:12.826075] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:52.293 [2024-12-13 11:01:12.826147] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1458598 ] 00:06:52.293 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.552 [2024-12-13 11:01:12.882015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.552 [2024-12-13 11:01:12.943970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.552 [2024-12-13 11:01:12.983767] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.552 [2024-12-13 11:01:13.043024] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:52.812 00:06:52.812 Compression does not support the verify option, aborting. 00:06:52.812 11:01:13 -- common/autotest_common.sh@653 -- # es=161 00:06:52.812 11:01:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:52.812 11:01:13 -- common/autotest_common.sh@662 -- # es=33 00:06:52.812 11:01:13 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:52.812 11:01:13 -- common/autotest_common.sh@670 -- # es=1 00:06:52.812 11:01:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:52.812 00:06:52.812 real 0m0.333s 00:06:52.812 user 0m0.264s 00:06:52.812 sys 0m0.108s 00:06:52.812 11:01:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:52.812 11:01:13 -- common/autotest_common.sh@10 -- # set +x 00:06:52.812 ************************************ 00:06:52.812 END TEST accel_compress_verify 00:06:52.812 ************************************ 00:06:52.812 11:01:13 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:52.812 11:01:13 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:52.812 11:01:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.812 11:01:13 -- common/autotest_common.sh@10 -- # set +x 00:06:52.812 ************************************ 00:06:52.812 START TEST accel_wrong_workload 00:06:52.812 ************************************ 00:06:52.812 11:01:13 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:06:52.812 11:01:13 -- common/autotest_common.sh@650 -- # local es=0 00:06:52.812 11:01:13 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:52.812 11:01:13 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:52.812 11:01:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.812 11:01:13 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:52.812 11:01:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.812 11:01:13 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:52.812 11:01:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:52.812 11:01:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.812 11:01:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.812 11:01:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.812 11:01:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.812 11:01:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.812 11:01:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.812 11:01:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.812 11:01:13 -- accel/accel.sh@42 -- # jq -r . 
00:06:52.812 Unsupported workload type: foobar 00:06:52.812 [2024-12-13 11:01:13.196428] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:52.812 accel_perf options: 00:06:52.812 [-h help message] 00:06:52.812 [-q queue depth per core] 00:06:52.812 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:52.812 [-T number of threads per core 00:06:52.812 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:52.812 [-t time in seconds] 00:06:52.812 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:52.812 [ dif_verify, , dif_generate, dif_generate_copy 00:06:52.812 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:52.812 [-l for compress/decompress workloads, name of uncompressed input file 00:06:52.812 [-S for crc32c workload, use this seed value (default 0) 00:06:52.812 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:52.812 [-f for fill workload, use this BYTE value (default 255) 00:06:52.812 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:52.812 [-y verify result if this switch is on] 00:06:52.812 [-a tasks to allocate per core (default: same value as -q)] 00:06:52.812 Can be used to spread operations across a wider range of memory. 00:06:52.812 11:01:13 -- common/autotest_common.sh@653 -- # es=1 00:06:52.812 11:01:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:52.812 11:01:13 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:52.812 11:01:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:52.812 00:06:52.812 real 0m0.034s 00:06:52.812 user 0m0.019s 00:06:52.812 sys 0m0.015s 00:06:52.812 11:01:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:52.812 11:01:13 -- common/autotest_common.sh@10 -- # set +x 00:06:52.812 ************************************ 00:06:52.812 END TEST accel_wrong_workload 00:06:52.812 ************************************ 00:06:52.812 Error: writing output failed: Broken pipe 00:06:52.812 11:01:13 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:52.812 11:01:13 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:52.812 11:01:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.812 11:01:13 -- common/autotest_common.sh@10 -- # set +x 00:06:52.812 ************************************ 00:06:52.812 START TEST accel_negative_buffers 00:06:52.812 ************************************ 00:06:52.812 11:01:13 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:52.812 11:01:13 -- common/autotest_common.sh@650 -- # local es=0 00:06:52.812 11:01:13 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:52.812 11:01:13 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:52.812 11:01:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.812 11:01:13 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:52.812 11:01:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.812 11:01:13 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:52.812 11:01:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 
-x -1 00:06:52.812 11:01:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.812 11:01:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.812 11:01:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.812 11:01:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.812 11:01:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.812 11:01:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.812 11:01:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.812 11:01:13 -- accel/accel.sh@42 -- # jq -r . 00:06:52.812 -x option must be non-negative. 00:06:52.812 [2024-12-13 11:01:13.268511] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:52.812 accel_perf options: 00:06:52.812 [-h help message] 00:06:52.812 [-q queue depth per core] 00:06:52.812 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:52.812 [-T number of threads per core 00:06:52.812 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:52.812 [-t time in seconds] 00:06:52.812 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:52.812 [ dif_verify, , dif_generate, dif_generate_copy 00:06:52.812 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:52.812 [-l for compress/decompress workloads, name of uncompressed input file 00:06:52.812 [-S for crc32c workload, use this seed value (default 0) 00:06:52.812 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:52.812 [-f for fill workload, use this BYTE value (default 255) 00:06:52.812 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:52.812 [-y verify result if this switch is on] 00:06:52.812 [-a tasks to allocate per core (default: same value as -q)] 00:06:52.812 Can be used to spread operations across a wider range of memory. 
00:06:52.812 11:01:13 -- common/autotest_common.sh@653 -- # es=1 00:06:52.812 11:01:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:52.813 11:01:13 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:52.813 11:01:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:52.813 00:06:52.813 real 0m0.035s 00:06:52.813 user 0m0.020s 00:06:52.813 sys 0m0.015s 00:06:52.813 11:01:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:52.813 11:01:13 -- common/autotest_common.sh@10 -- # set +x 00:06:52.813 ************************************ 00:06:52.813 END TEST accel_negative_buffers 00:06:52.813 ************************************ 00:06:52.813 Error: writing output failed: Broken pipe 00:06:52.813 11:01:13 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:52.813 11:01:13 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:52.813 11:01:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.813 11:01:13 -- common/autotest_common.sh@10 -- # set +x 00:06:52.813 ************************************ 00:06:52.813 START TEST accel_crc32c 00:06:52.813 ************************************ 00:06:52.813 11:01:13 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:52.813 11:01:13 -- accel/accel.sh@16 -- # local accel_opc 00:06:52.813 11:01:13 -- accel/accel.sh@17 -- # local accel_module 00:06:52.813 11:01:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:52.813 11:01:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:52.813 11:01:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.813 11:01:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.813 11:01:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.813 11:01:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.813 11:01:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.813 11:01:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.813 11:01:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.813 11:01:13 -- accel/accel.sh@42 -- # jq -r . 00:06:52.813 [2024-12-13 11:01:13.329793] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:52.813 [2024-12-13 11:01:13.329843] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1458657 ] 00:06:52.813 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.813 [2024-12-13 11:01:13.376977] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.071 [2024-12-13 11:01:13.445234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.449 11:01:14 -- accel/accel.sh@18 -- # out=' 00:06:54.449 SPDK Configuration: 00:06:54.449 Core mask: 0x1 00:06:54.449 00:06:54.449 Accel Perf Configuration: 00:06:54.449 Workload Type: crc32c 00:06:54.449 CRC-32C seed: 32 00:06:54.449 Transfer size: 4096 bytes 00:06:54.449 Vector count 1 00:06:54.449 Module: software 00:06:54.449 Queue depth: 32 00:06:54.449 Allocate depth: 32 00:06:54.449 # threads/core: 1 00:06:54.449 Run time: 1 seconds 00:06:54.449 Verify: Yes 00:06:54.449 00:06:54.449 Running for 1 seconds... 
00:06:54.449 00:06:54.449 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:54.449 ------------------------------------------------------------------------------------ 00:06:54.449 0,0 630720/s 2463 MiB/s 0 0 00:06:54.449 ==================================================================================== 00:06:54.449 Total 630720/s 2463 MiB/s 0 0' 00:06:54.449 11:01:14 -- accel/accel.sh@20 -- # IFS=: 00:06:54.449 11:01:14 -- accel/accel.sh@20 -- # read -r var val 00:06:54.449 11:01:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:54.449 11:01:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:54.449 11:01:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.449 11:01:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.449 11:01:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.449 11:01:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.449 11:01:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.449 11:01:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.449 11:01:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.449 11:01:14 -- accel/accel.sh@42 -- # jq -r . 00:06:54.449 [2024-12-13 11:01:14.660049] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:54.449 [2024-12-13 11:01:14.660127] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1458922 ] 00:06:54.449 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.449 [2024-12-13 11:01:14.713956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.449 [2024-12-13 11:01:14.779096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.449 11:01:14 -- accel/accel.sh@21 -- # val= 00:06:54.449 11:01:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.449 11:01:14 -- accel/accel.sh@20 -- # IFS=: 00:06:54.449 11:01:14 -- accel/accel.sh@20 -- # read -r var val 00:06:54.449 11:01:14 -- accel/accel.sh@21 -- # val= 00:06:54.449 11:01:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.449 11:01:14 -- accel/accel.sh@20 -- # IFS=: 00:06:54.449 11:01:14 -- accel/accel.sh@20 -- # read -r var val 00:06:54.449 11:01:14 -- accel/accel.sh@21 -- # val=0x1 00:06:54.449 11:01:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.449 11:01:14 -- accel/accel.sh@20 -- # IFS=: 00:06:54.449 11:01:14 -- accel/accel.sh@20 -- # read -r var val 00:06:54.449 11:01:14 -- accel/accel.sh@21 -- # val= 00:06:54.449 11:01:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.449 11:01:14 -- accel/accel.sh@20 -- # IFS=: 00:06:54.449 11:01:14 -- accel/accel.sh@20 -- # read -r var val 00:06:54.449 11:01:14 -- accel/accel.sh@21 -- # val= 00:06:54.449 11:01:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.449 11:01:14 -- accel/accel.sh@20 -- # IFS=: 00:06:54.449 11:01:14 -- accel/accel.sh@20 -- # read -r var val 00:06:54.449 11:01:14 -- accel/accel.sh@21 -- # val=crc32c 00:06:54.449 11:01:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.449 11:01:14 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:54.449 11:01:14 -- accel/accel.sh@20 -- # IFS=: 00:06:54.449 11:01:14 -- accel/accel.sh@20 -- # read -r var val 00:06:54.449 11:01:14 -- accel/accel.sh@21 -- # val=32 00:06:54.449 11:01:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.449 11:01:14 -- accel/accel.sh@20 -- # IFS=: 00:06:54.449 11:01:14 
-- accel/accel.sh@20 -- # read -r var val 00:06:54.449 11:01:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:54.449 11:01:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.449 11:01:14 -- accel/accel.sh@20 -- # IFS=: 00:06:54.449 11:01:14 -- accel/accel.sh@20 -- # read -r var val 00:06:54.449 11:01:14 -- accel/accel.sh@21 -- # val= 00:06:54.449 11:01:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.449 11:01:14 -- accel/accel.sh@20 -- # IFS=: 00:06:54.449 11:01:14 -- accel/accel.sh@20 -- # read -r var val 00:06:54.449 11:01:14 -- accel/accel.sh@21 -- # val=software 00:06:54.449 11:01:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.449 11:01:14 -- accel/accel.sh@23 -- # accel_module=software 00:06:54.449 11:01:14 -- accel/accel.sh@20 -- # IFS=: 00:06:54.449 11:01:14 -- accel/accel.sh@20 -- # read -r var val 00:06:54.449 11:01:14 -- accel/accel.sh@21 -- # val=32 00:06:54.449 11:01:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.449 11:01:14 -- accel/accel.sh@20 -- # IFS=: 00:06:54.449 11:01:14 -- accel/accel.sh@20 -- # read -r var val 00:06:54.449 11:01:14 -- accel/accel.sh@21 -- # val=32 00:06:54.449 11:01:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.449 11:01:14 -- accel/accel.sh@20 -- # IFS=: 00:06:54.449 11:01:14 -- accel/accel.sh@20 -- # read -r var val 00:06:54.449 11:01:14 -- accel/accel.sh@21 -- # val=1 00:06:54.449 11:01:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.449 11:01:14 -- accel/accel.sh@20 -- # IFS=: 00:06:54.449 11:01:14 -- accel/accel.sh@20 -- # read -r var val 00:06:54.449 11:01:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:54.449 11:01:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.449 11:01:14 -- accel/accel.sh@20 -- # IFS=: 00:06:54.449 11:01:14 -- accel/accel.sh@20 -- # read -r var val 00:06:54.449 11:01:14 -- accel/accel.sh@21 -- # val=Yes 00:06:54.449 11:01:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.449 11:01:14 -- accel/accel.sh@20 -- # IFS=: 00:06:54.449 11:01:14 -- accel/accel.sh@20 -- # read -r var val 00:06:54.449 11:01:14 -- accel/accel.sh@21 -- # val= 00:06:54.449 11:01:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.449 11:01:14 -- accel/accel.sh@20 -- # IFS=: 00:06:54.449 11:01:14 -- accel/accel.sh@20 -- # read -r var val 00:06:54.449 11:01:14 -- accel/accel.sh@21 -- # val= 00:06:54.449 11:01:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.449 11:01:14 -- accel/accel.sh@20 -- # IFS=: 00:06:54.449 11:01:14 -- accel/accel.sh@20 -- # read -r var val 00:06:55.830 11:01:15 -- accel/accel.sh@21 -- # val= 00:06:55.830 11:01:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.830 11:01:15 -- accel/accel.sh@20 -- # IFS=: 00:06:55.830 11:01:15 -- accel/accel.sh@20 -- # read -r var val 00:06:55.830 11:01:15 -- accel/accel.sh@21 -- # val= 00:06:55.830 11:01:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.830 11:01:15 -- accel/accel.sh@20 -- # IFS=: 00:06:55.830 11:01:15 -- accel/accel.sh@20 -- # read -r var val 00:06:55.830 11:01:15 -- accel/accel.sh@21 -- # val= 00:06:55.830 11:01:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.830 11:01:15 -- accel/accel.sh@20 -- # IFS=: 00:06:55.830 11:01:15 -- accel/accel.sh@20 -- # read -r var val 00:06:55.830 11:01:15 -- accel/accel.sh@21 -- # val= 00:06:55.830 11:01:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.830 11:01:15 -- accel/accel.sh@20 -- # IFS=: 00:06:55.830 11:01:15 -- accel/accel.sh@20 -- # read -r var val 00:06:55.830 11:01:15 -- accel/accel.sh@21 -- # val= 00:06:55.830 11:01:15 -- accel/accel.sh@22 -- # case "$var" in 
00:06:55.830 11:01:15 -- accel/accel.sh@20 -- # IFS=: 00:06:55.830 11:01:15 -- accel/accel.sh@20 -- # read -r var val 00:06:55.830 11:01:15 -- accel/accel.sh@21 -- # val= 00:06:55.830 11:01:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.830 11:01:15 -- accel/accel.sh@20 -- # IFS=: 00:06:55.830 11:01:15 -- accel/accel.sh@20 -- # read -r var val 00:06:55.830 11:01:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:55.830 11:01:15 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:55.830 11:01:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.830 00:06:55.830 real 0m2.657s 00:06:55.830 user 0m2.451s 00:06:55.830 sys 0m0.214s 00:06:55.830 11:01:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:55.830 11:01:15 -- common/autotest_common.sh@10 -- # set +x 00:06:55.830 ************************************ 00:06:55.830 END TEST accel_crc32c 00:06:55.830 ************************************ 00:06:55.830 11:01:16 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:55.830 11:01:16 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:55.830 11:01:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.830 11:01:16 -- common/autotest_common.sh@10 -- # set +x 00:06:55.830 ************************************ 00:06:55.830 START TEST accel_crc32c_C2 00:06:55.830 ************************************ 00:06:55.830 11:01:16 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:55.830 11:01:16 -- accel/accel.sh@16 -- # local accel_opc 00:06:55.830 11:01:16 -- accel/accel.sh@17 -- # local accel_module 00:06:55.830 11:01:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:55.830 11:01:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:55.830 11:01:16 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.830 11:01:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.830 11:01:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.830 11:01:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.830 11:01:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.830 11:01:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.830 11:01:16 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.830 11:01:16 -- accel/accel.sh@42 -- # jq -r . 00:06:55.830 [2024-12-13 11:01:16.036059] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:55.830 [2024-12-13 11:01:16.036123] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1459209 ] 00:06:55.830 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.830 [2024-12-13 11:01:16.087890] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.830 [2024-12-13 11:01:16.151551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.207 11:01:17 -- accel/accel.sh@18 -- # out=' 00:06:57.207 SPDK Configuration: 00:06:57.207 Core mask: 0x1 00:06:57.207 00:06:57.207 Accel Perf Configuration: 00:06:57.207 Workload Type: crc32c 00:06:57.207 CRC-32C seed: 0 00:06:57.207 Transfer size: 4096 bytes 00:06:57.207 Vector count 2 00:06:57.207 Module: software 00:06:57.207 Queue depth: 32 00:06:57.207 Allocate depth: 32 00:06:57.207 # threads/core: 1 00:06:57.207 Run time: 1 seconds 00:06:57.208 Verify: Yes 00:06:57.208 00:06:57.208 Running for 1 seconds... 00:06:57.208 00:06:57.208 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:57.208 ------------------------------------------------------------------------------------ 00:06:57.208 0,0 498848/s 3897 MiB/s 0 0 00:06:57.208 ==================================================================================== 00:06:57.208 Total 498848/s 1948 MiB/s 0 0' 00:06:57.208 11:01:17 -- accel/accel.sh@20 -- # IFS=: 00:06:57.208 11:01:17 -- accel/accel.sh@20 -- # read -r var val 00:06:57.208 11:01:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:57.208 11:01:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:57.208 11:01:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.208 11:01:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.208 11:01:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.208 11:01:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.208 11:01:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.208 11:01:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.208 11:01:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.208 11:01:17 -- accel/accel.sh@42 -- # jq -r . 00:06:57.208 [2024-12-13 11:01:17.365150] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:57.208 [2024-12-13 11:01:17.365215] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1459476 ] 00:06:57.208 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.208 [2024-12-13 11:01:17.417180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.208 [2024-12-13 11:01:17.479608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.208 11:01:17 -- accel/accel.sh@21 -- # val= 00:06:57.208 11:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.208 11:01:17 -- accel/accel.sh@20 -- # IFS=: 00:06:57.208 11:01:17 -- accel/accel.sh@20 -- # read -r var val 00:06:57.208 11:01:17 -- accel/accel.sh@21 -- # val= 00:06:57.208 11:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.208 11:01:17 -- accel/accel.sh@20 -- # IFS=: 00:06:57.208 11:01:17 -- accel/accel.sh@20 -- # read -r var val 00:06:57.208 11:01:17 -- accel/accel.sh@21 -- # val=0x1 00:06:57.208 11:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.208 11:01:17 -- accel/accel.sh@20 -- # IFS=: 00:06:57.208 11:01:17 -- accel/accel.sh@20 -- # read -r var val 00:06:57.208 11:01:17 -- accel/accel.sh@21 -- # val= 00:06:57.208 11:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.208 11:01:17 -- accel/accel.sh@20 -- # IFS=: 00:06:57.208 11:01:17 -- accel/accel.sh@20 -- # read -r var val 00:06:57.208 11:01:17 -- accel/accel.sh@21 -- # val= 00:06:57.208 11:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.208 11:01:17 -- accel/accel.sh@20 -- # IFS=: 00:06:57.208 11:01:17 -- accel/accel.sh@20 -- # read -r var val 00:06:57.208 11:01:17 -- accel/accel.sh@21 -- # val=crc32c 00:06:57.208 11:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.208 11:01:17 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:57.208 11:01:17 -- accel/accel.sh@20 -- # IFS=: 00:06:57.208 11:01:17 -- accel/accel.sh@20 -- # read -r var val 00:06:57.208 11:01:17 -- accel/accel.sh@21 -- # val=0 00:06:57.208 11:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.208 11:01:17 -- accel/accel.sh@20 -- # IFS=: 00:06:57.208 11:01:17 -- accel/accel.sh@20 -- # read -r var val 00:06:57.208 11:01:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:57.208 11:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.208 11:01:17 -- accel/accel.sh@20 -- # IFS=: 00:06:57.208 11:01:17 -- accel/accel.sh@20 -- # read -r var val 00:06:57.208 11:01:17 -- accel/accel.sh@21 -- # val= 00:06:57.208 11:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.208 11:01:17 -- accel/accel.sh@20 -- # IFS=: 00:06:57.208 11:01:17 -- accel/accel.sh@20 -- # read -r var val 00:06:57.208 11:01:17 -- accel/accel.sh@21 -- # val=software 00:06:57.208 11:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.208 11:01:17 -- accel/accel.sh@23 -- # accel_module=software 00:06:57.208 11:01:17 -- accel/accel.sh@20 -- # IFS=: 00:06:57.208 11:01:17 -- accel/accel.sh@20 -- # read -r var val 00:06:57.208 11:01:17 -- accel/accel.sh@21 -- # val=32 00:06:57.208 11:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.208 11:01:17 -- accel/accel.sh@20 -- # IFS=: 00:06:57.208 11:01:17 -- accel/accel.sh@20 -- # read -r var val 00:06:57.208 11:01:17 -- accel/accel.sh@21 -- # val=32 00:06:57.208 11:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.208 11:01:17 -- accel/accel.sh@20 -- # IFS=: 00:06:57.208 11:01:17 -- accel/accel.sh@20 -- # read -r var val 00:06:57.208 11:01:17 -- 
accel/accel.sh@21 -- # val=1 00:06:57.208 11:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.208 11:01:17 -- accel/accel.sh@20 -- # IFS=: 00:06:57.208 11:01:17 -- accel/accel.sh@20 -- # read -r var val 00:06:57.208 11:01:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:57.208 11:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.208 11:01:17 -- accel/accel.sh@20 -- # IFS=: 00:06:57.208 11:01:17 -- accel/accel.sh@20 -- # read -r var val 00:06:57.208 11:01:17 -- accel/accel.sh@21 -- # val=Yes 00:06:57.208 11:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.208 11:01:17 -- accel/accel.sh@20 -- # IFS=: 00:06:57.208 11:01:17 -- accel/accel.sh@20 -- # read -r var val 00:06:57.208 11:01:17 -- accel/accel.sh@21 -- # val= 00:06:57.208 11:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.208 11:01:17 -- accel/accel.sh@20 -- # IFS=: 00:06:57.208 11:01:17 -- accel/accel.sh@20 -- # read -r var val 00:06:57.208 11:01:17 -- accel/accel.sh@21 -- # val= 00:06:57.208 11:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.208 11:01:17 -- accel/accel.sh@20 -- # IFS=: 00:06:57.208 11:01:17 -- accel/accel.sh@20 -- # read -r var val 00:06:58.144 11:01:18 -- accel/accel.sh@21 -- # val= 00:06:58.144 11:01:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.144 11:01:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.144 11:01:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.144 11:01:18 -- accel/accel.sh@21 -- # val= 00:06:58.144 11:01:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.145 11:01:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.145 11:01:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.145 11:01:18 -- accel/accel.sh@21 -- # val= 00:06:58.145 11:01:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.145 11:01:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.145 11:01:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.145 11:01:18 -- accel/accel.sh@21 -- # val= 00:06:58.145 11:01:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.145 11:01:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.145 11:01:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.145 11:01:18 -- accel/accel.sh@21 -- # val= 00:06:58.145 11:01:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.145 11:01:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.145 11:01:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.145 11:01:18 -- accel/accel.sh@21 -- # val= 00:06:58.145 11:01:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.145 11:01:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.145 11:01:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.145 11:01:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:58.145 11:01:18 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:58.145 11:01:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.145 00:06:58.145 real 0m2.661s 00:06:58.145 user 0m2.460s 00:06:58.145 sys 0m0.208s 00:06:58.145 11:01:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:58.145 11:01:18 -- common/autotest_common.sh@10 -- # set +x 00:06:58.145 ************************************ 00:06:58.145 END TEST accel_crc32c_C2 00:06:58.145 ************************************ 00:06:58.145 11:01:18 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:58.145 11:01:18 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:58.145 11:01:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:58.145 11:01:18 -- common/autotest_common.sh@10 -- # set +x 00:06:58.145 ************************************ 00:06:58.145 START TEST accel_copy 
00:06:58.145 ************************************ 00:06:58.145 11:01:18 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:58.145 11:01:18 -- accel/accel.sh@16 -- # local accel_opc 00:06:58.145 11:01:18 -- accel/accel.sh@17 -- # local accel_module 00:06:58.145 11:01:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:58.403 11:01:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:58.403 11:01:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.403 11:01:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.403 11:01:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.403 11:01:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.403 11:01:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.403 11:01:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.403 11:01:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.403 11:01:18 -- accel/accel.sh@42 -- # jq -r . 00:06:58.403 [2024-12-13 11:01:18.737150] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:58.403 [2024-12-13 11:01:18.737216] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1459759 ] 00:06:58.403 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.403 [2024-12-13 11:01:18.808736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.403 [2024-12-13 11:01:18.873347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.779 11:01:20 -- accel/accel.sh@18 -- # out=' 00:06:59.779 SPDK Configuration: 00:06:59.779 Core mask: 0x1 00:06:59.779 00:06:59.779 Accel Perf Configuration: 00:06:59.779 Workload Type: copy 00:06:59.779 Transfer size: 4096 bytes 00:06:59.779 Vector count 1 00:06:59.779 Module: software 00:06:59.780 Queue depth: 32 00:06:59.780 Allocate depth: 32 00:06:59.780 # threads/core: 1 00:06:59.780 Run time: 1 seconds 00:06:59.780 Verify: Yes 00:06:59.780 00:06:59.780 Running for 1 seconds... 00:06:59.780 00:06:59.780 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:59.780 ------------------------------------------------------------------------------------ 00:06:59.780 0,0 468160/s 1828 MiB/s 0 0 00:06:59.780 ==================================================================================== 00:06:59.780 Total 468160/s 1828 MiB/s 0 0' 00:06:59.780 11:01:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.780 11:01:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.780 11:01:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:59.780 11:01:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:59.780 11:01:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.780 11:01:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.780 11:01:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.780 11:01:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.780 11:01:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.780 11:01:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.780 11:01:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.780 11:01:20 -- accel/accel.sh@42 -- # jq -r . 00:06:59.780 [2024-12-13 11:01:20.090335] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
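The copy throughput reported above is simply the transfer count multiplied by the 4096-byte transfer size; a quick sanity check of the MiB/s column (numbers taken from the table above, with %d truncating the same way the tool appears to):

    awk 'BEGIN { printf "%d MiB/s\n", 468160 * 4096 / (1024 * 1024) }'    # prints 1828 MiB/s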
00:06:59.780 [2024-12-13 11:01:20.090396] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1460031 ] 00:06:59.780 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.780 [2024-12-13 11:01:20.142566] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.780 [2024-12-13 11:01:20.208046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.780 11:01:20 -- accel/accel.sh@21 -- # val= 00:06:59.780 11:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.780 11:01:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.780 11:01:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.780 11:01:20 -- accel/accel.sh@21 -- # val= 00:06:59.780 11:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.780 11:01:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.780 11:01:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.780 11:01:20 -- accel/accel.sh@21 -- # val=0x1 00:06:59.780 11:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.780 11:01:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.780 11:01:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.780 11:01:20 -- accel/accel.sh@21 -- # val= 00:06:59.780 11:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.780 11:01:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.780 11:01:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.780 11:01:20 -- accel/accel.sh@21 -- # val= 00:06:59.780 11:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.780 11:01:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.780 11:01:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.780 11:01:20 -- accel/accel.sh@21 -- # val=copy 00:06:59.780 11:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.780 11:01:20 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:59.780 11:01:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.780 11:01:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.780 11:01:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:59.780 11:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.780 11:01:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.780 11:01:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.780 11:01:20 -- accel/accel.sh@21 -- # val= 00:06:59.780 11:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.780 11:01:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.780 11:01:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.780 11:01:20 -- accel/accel.sh@21 -- # val=software 00:06:59.780 11:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.780 11:01:20 -- accel/accel.sh@23 -- # accel_module=software 00:06:59.780 11:01:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.780 11:01:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.780 11:01:20 -- accel/accel.sh@21 -- # val=32 00:06:59.780 11:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.780 11:01:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.780 11:01:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.780 11:01:20 -- accel/accel.sh@21 -- # val=32 00:06:59.780 11:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.780 11:01:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.780 11:01:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.780 11:01:20 -- accel/accel.sh@21 -- # val=1 00:06:59.780 11:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.780 11:01:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.780 11:01:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.780 11:01:20 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:59.780 11:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.780 11:01:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.780 11:01:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.780 11:01:20 -- accel/accel.sh@21 -- # val=Yes 00:06:59.780 11:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.780 11:01:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.780 11:01:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.780 11:01:20 -- accel/accel.sh@21 -- # val= 00:06:59.780 11:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.780 11:01:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.780 11:01:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.780 11:01:20 -- accel/accel.sh@21 -- # val= 00:06:59.780 11:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.780 11:01:20 -- accel/accel.sh@20 -- # IFS=: 00:06:59.780 11:01:20 -- accel/accel.sh@20 -- # read -r var val 00:07:01.155 11:01:21 -- accel/accel.sh@21 -- # val= 00:07:01.155 11:01:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.155 11:01:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.155 11:01:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.155 11:01:21 -- accel/accel.sh@21 -- # val= 00:07:01.155 11:01:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.155 11:01:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.155 11:01:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.155 11:01:21 -- accel/accel.sh@21 -- # val= 00:07:01.155 11:01:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.155 11:01:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.155 11:01:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.155 11:01:21 -- accel/accel.sh@21 -- # val= 00:07:01.155 11:01:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.155 11:01:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.155 11:01:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.155 11:01:21 -- accel/accel.sh@21 -- # val= 00:07:01.155 11:01:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.155 11:01:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.155 11:01:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.155 11:01:21 -- accel/accel.sh@21 -- # val= 00:07:01.155 11:01:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.155 11:01:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.155 11:01:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.155 11:01:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:01.155 11:01:21 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:07:01.155 11:01:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.155 00:07:01.155 real 0m2.694s 00:07:01.155 user 0m2.467s 00:07:01.155 sys 0m0.232s 00:07:01.155 11:01:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:01.155 11:01:21 -- common/autotest_common.sh@10 -- # set +x 00:07:01.155 ************************************ 00:07:01.155 END TEST accel_copy 00:07:01.155 ************************************ 00:07:01.155 11:01:21 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:01.155 11:01:21 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:01.155 11:01:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:01.155 11:01:21 -- common/autotest_common.sh@10 -- # set +x 00:07:01.155 ************************************ 00:07:01.155 START TEST accel_fill 00:07:01.155 ************************************ 00:07:01.155 11:01:21 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:01.155 11:01:21 -- accel/accel.sh@16 -- # local accel_opc 
00:07:01.155 11:01:21 -- accel/accel.sh@17 -- # local accel_module 00:07:01.155 11:01:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:01.155 11:01:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:01.155 11:01:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.155 11:01:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.155 11:01:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.155 11:01:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.155 11:01:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.155 11:01:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.155 11:01:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.155 11:01:21 -- accel/accel.sh@42 -- # jq -r . 00:07:01.155 [2024-12-13 11:01:21.468092] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:01.155 [2024-12-13 11:01:21.468149] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1460314 ] 00:07:01.155 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.155 [2024-12-13 11:01:21.520396] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.155 [2024-12-13 11:01:21.584356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.532 11:01:22 -- accel/accel.sh@18 -- # out=' 00:07:02.532 SPDK Configuration: 00:07:02.532 Core mask: 0x1 00:07:02.532 00:07:02.532 Accel Perf Configuration: 00:07:02.532 Workload Type: fill 00:07:02.532 Fill pattern: 0x80 00:07:02.532 Transfer size: 4096 bytes 00:07:02.532 Vector count 1 00:07:02.532 Module: software 00:07:02.532 Queue depth: 64 00:07:02.532 Allocate depth: 64 00:07:02.532 # threads/core: 1 00:07:02.532 Run time: 1 seconds 00:07:02.532 Verify: Yes 00:07:02.532 00:07:02.532 Running for 1 seconds... 00:07:02.532 00:07:02.532 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:02.532 ------------------------------------------------------------------------------------ 00:07:02.532 0,0 734016/s 2867 MiB/s 0 0 00:07:02.532 ==================================================================================== 00:07:02.532 Total 734016/s 2867 MiB/s 0 0' 00:07:02.532 11:01:22 -- accel/accel.sh@20 -- # IFS=: 00:07:02.532 11:01:22 -- accel/accel.sh@20 -- # read -r var val 00:07:02.532 11:01:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:02.532 11:01:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:02.532 11:01:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.532 11:01:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.532 11:01:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.532 11:01:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.532 11:01:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.532 11:01:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.532 11:01:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.532 11:01:22 -- accel/accel.sh@42 -- # jq -r . 00:07:02.532 [2024-12-13 11:01:22.800284] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
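Going by the command line logged above, the fill case can be reproduced outside the harness by calling the example binary directly; the flags line up with the configuration block the tool prints (-t run time, -w workload, -f fill pattern 128 = 0x80, -q queue depth, -a allocate depth, -y verify). This is only a sketch: the JSON accel config that accel.sh feeds in via -c /dev/fd/62 is omitted, so module defaults apply, and root is typically needed for hugepage access.

    cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
    sudo ./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y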
00:07:02.532 [2024-12-13 11:01:22.800344] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1460568 ] 00:07:02.532 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.532 [2024-12-13 11:01:22.852395] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.532 [2024-12-13 11:01:22.914985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.532 11:01:22 -- accel/accel.sh@21 -- # val= 00:07:02.532 11:01:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.532 11:01:22 -- accel/accel.sh@20 -- # IFS=: 00:07:02.532 11:01:22 -- accel/accel.sh@20 -- # read -r var val 00:07:02.532 11:01:22 -- accel/accel.sh@21 -- # val= 00:07:02.532 11:01:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.532 11:01:22 -- accel/accel.sh@20 -- # IFS=: 00:07:02.532 11:01:22 -- accel/accel.sh@20 -- # read -r var val 00:07:02.532 11:01:22 -- accel/accel.sh@21 -- # val=0x1 00:07:02.532 11:01:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.532 11:01:22 -- accel/accel.sh@20 -- # IFS=: 00:07:02.532 11:01:22 -- accel/accel.sh@20 -- # read -r var val 00:07:02.532 11:01:22 -- accel/accel.sh@21 -- # val= 00:07:02.532 11:01:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.532 11:01:22 -- accel/accel.sh@20 -- # IFS=: 00:07:02.532 11:01:22 -- accel/accel.sh@20 -- # read -r var val 00:07:02.532 11:01:22 -- accel/accel.sh@21 -- # val= 00:07:02.532 11:01:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.532 11:01:22 -- accel/accel.sh@20 -- # IFS=: 00:07:02.532 11:01:22 -- accel/accel.sh@20 -- # read -r var val 00:07:02.532 11:01:22 -- accel/accel.sh@21 -- # val=fill 00:07:02.532 11:01:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.532 11:01:22 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:02.532 11:01:22 -- accel/accel.sh@20 -- # IFS=: 00:07:02.532 11:01:22 -- accel/accel.sh@20 -- # read -r var val 00:07:02.532 11:01:22 -- accel/accel.sh@21 -- # val=0x80 00:07:02.532 11:01:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.532 11:01:22 -- accel/accel.sh@20 -- # IFS=: 00:07:02.532 11:01:22 -- accel/accel.sh@20 -- # read -r var val 00:07:02.532 11:01:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:02.532 11:01:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.532 11:01:22 -- accel/accel.sh@20 -- # IFS=: 00:07:02.532 11:01:22 -- accel/accel.sh@20 -- # read -r var val 00:07:02.532 11:01:22 -- accel/accel.sh@21 -- # val= 00:07:02.532 11:01:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.532 11:01:22 -- accel/accel.sh@20 -- # IFS=: 00:07:02.532 11:01:22 -- accel/accel.sh@20 -- # read -r var val 00:07:02.532 11:01:22 -- accel/accel.sh@21 -- # val=software 00:07:02.532 11:01:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.532 11:01:22 -- accel/accel.sh@23 -- # accel_module=software 00:07:02.532 11:01:22 -- accel/accel.sh@20 -- # IFS=: 00:07:02.532 11:01:22 -- accel/accel.sh@20 -- # read -r var val 00:07:02.532 11:01:22 -- accel/accel.sh@21 -- # val=64 00:07:02.532 11:01:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.532 11:01:22 -- accel/accel.sh@20 -- # IFS=: 00:07:02.532 11:01:22 -- accel/accel.sh@20 -- # read -r var val 00:07:02.532 11:01:22 -- accel/accel.sh@21 -- # val=64 00:07:02.532 11:01:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.532 11:01:22 -- accel/accel.sh@20 -- # IFS=: 00:07:02.532 11:01:22 -- accel/accel.sh@20 -- # read -r var val 00:07:02.532 11:01:22 -- 
accel/accel.sh@21 -- # val=1 00:07:02.532 11:01:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.532 11:01:22 -- accel/accel.sh@20 -- # IFS=: 00:07:02.532 11:01:22 -- accel/accel.sh@20 -- # read -r var val 00:07:02.532 11:01:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:02.532 11:01:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.532 11:01:22 -- accel/accel.sh@20 -- # IFS=: 00:07:02.532 11:01:22 -- accel/accel.sh@20 -- # read -r var val 00:07:02.532 11:01:22 -- accel/accel.sh@21 -- # val=Yes 00:07:02.532 11:01:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.532 11:01:22 -- accel/accel.sh@20 -- # IFS=: 00:07:02.532 11:01:22 -- accel/accel.sh@20 -- # read -r var val 00:07:02.532 11:01:22 -- accel/accel.sh@21 -- # val= 00:07:02.532 11:01:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.532 11:01:22 -- accel/accel.sh@20 -- # IFS=: 00:07:02.532 11:01:22 -- accel/accel.sh@20 -- # read -r var val 00:07:02.532 11:01:22 -- accel/accel.sh@21 -- # val= 00:07:02.532 11:01:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.532 11:01:22 -- accel/accel.sh@20 -- # IFS=: 00:07:02.532 11:01:22 -- accel/accel.sh@20 -- # read -r var val 00:07:03.910 11:01:24 -- accel/accel.sh@21 -- # val= 00:07:03.910 11:01:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.910 11:01:24 -- accel/accel.sh@20 -- # IFS=: 00:07:03.910 11:01:24 -- accel/accel.sh@20 -- # read -r var val 00:07:03.910 11:01:24 -- accel/accel.sh@21 -- # val= 00:07:03.910 11:01:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.910 11:01:24 -- accel/accel.sh@20 -- # IFS=: 00:07:03.910 11:01:24 -- accel/accel.sh@20 -- # read -r var val 00:07:03.910 11:01:24 -- accel/accel.sh@21 -- # val= 00:07:03.910 11:01:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.910 11:01:24 -- accel/accel.sh@20 -- # IFS=: 00:07:03.910 11:01:24 -- accel/accel.sh@20 -- # read -r var val 00:07:03.910 11:01:24 -- accel/accel.sh@21 -- # val= 00:07:03.910 11:01:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.910 11:01:24 -- accel/accel.sh@20 -- # IFS=: 00:07:03.910 11:01:24 -- accel/accel.sh@20 -- # read -r var val 00:07:03.910 11:01:24 -- accel/accel.sh@21 -- # val= 00:07:03.910 11:01:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.910 11:01:24 -- accel/accel.sh@20 -- # IFS=: 00:07:03.910 11:01:24 -- accel/accel.sh@20 -- # read -r var val 00:07:03.910 11:01:24 -- accel/accel.sh@21 -- # val= 00:07:03.910 11:01:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.910 11:01:24 -- accel/accel.sh@20 -- # IFS=: 00:07:03.910 11:01:24 -- accel/accel.sh@20 -- # read -r var val 00:07:03.910 11:01:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:03.910 11:01:24 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:03.910 11:01:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.910 00:07:03.910 real 0m2.667s 00:07:03.910 user 0m2.465s 00:07:03.910 sys 0m0.211s 00:07:03.910 11:01:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:03.910 11:01:24 -- common/autotest_common.sh@10 -- # set +x 00:07:03.910 ************************************ 00:07:03.910 END TEST accel_fill 00:07:03.910 ************************************ 00:07:03.910 11:01:24 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:03.910 11:01:24 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:03.910 11:01:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.910 11:01:24 -- common/autotest_common.sh@10 -- # set +x 00:07:03.910 ************************************ 00:07:03.910 START TEST 
accel_copy_crc32c 00:07:03.910 ************************************ 00:07:03.910 11:01:24 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:07:03.910 11:01:24 -- accel/accel.sh@16 -- # local accel_opc 00:07:03.910 11:01:24 -- accel/accel.sh@17 -- # local accel_module 00:07:03.910 11:01:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:03.910 11:01:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:03.910 11:01:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.910 11:01:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.910 11:01:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.910 11:01:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.910 11:01:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.910 11:01:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.910 11:01:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.910 11:01:24 -- accel/accel.sh@42 -- # jq -r . 00:07:03.910 [2024-12-13 11:01:24.176567] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:03.910 [2024-12-13 11:01:24.176630] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1460811 ] 00:07:03.910 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.910 [2024-12-13 11:01:24.229761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.910 [2024-12-13 11:01:24.294199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.287 11:01:25 -- accel/accel.sh@18 -- # out=' 00:07:05.287 SPDK Configuration: 00:07:05.287 Core mask: 0x1 00:07:05.287 00:07:05.287 Accel Perf Configuration: 00:07:05.287 Workload Type: copy_crc32c 00:07:05.287 CRC-32C seed: 0 00:07:05.287 Vector size: 4096 bytes 00:07:05.287 Transfer size: 4096 bytes 00:07:05.287 Vector count 1 00:07:05.287 Module: software 00:07:05.287 Queue depth: 32 00:07:05.287 Allocate depth: 32 00:07:05.287 # threads/core: 1 00:07:05.287 Run time: 1 seconds 00:07:05.287 Verify: Yes 00:07:05.287 00:07:05.287 Running for 1 seconds... 00:07:05.287 00:07:05.287 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:05.287 ------------------------------------------------------------------------------------ 00:07:05.287 0,0 359488/s 1404 MiB/s 0 0 00:07:05.287 ==================================================================================== 00:07:05.287 Total 359488/s 1404 MiB/s 0 0' 00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # IFS=: 00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # read -r var val 00:07:05.287 11:01:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:05.287 11:01:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:05.287 11:01:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.287 11:01:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.287 11:01:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.287 11:01:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.287 11:01:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.287 11:01:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.287 11:01:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.287 11:01:25 -- accel/accel.sh@42 -- # jq -r . 
00:07:05.287 [2024-12-13 11:01:25.510614] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:05.287 [2024-12-13 11:01:25.510693] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1461012 ] 00:07:05.287 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.287 [2024-12-13 11:01:25.562982] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.287 [2024-12-13 11:01:25.625900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.287 11:01:25 -- accel/accel.sh@21 -- # val= 00:07:05.287 11:01:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # IFS=: 00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # read -r var val 00:07:05.287 11:01:25 -- accel/accel.sh@21 -- # val= 00:07:05.287 11:01:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # IFS=: 00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # read -r var val 00:07:05.287 11:01:25 -- accel/accel.sh@21 -- # val=0x1 00:07:05.287 11:01:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # IFS=: 00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # read -r var val 00:07:05.287 11:01:25 -- accel/accel.sh@21 -- # val= 00:07:05.287 11:01:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # IFS=: 00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # read -r var val 00:07:05.287 11:01:25 -- accel/accel.sh@21 -- # val= 00:07:05.287 11:01:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # IFS=: 00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # read -r var val 00:07:05.287 11:01:25 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:05.287 11:01:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.287 11:01:25 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # IFS=: 00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # read -r var val 00:07:05.287 11:01:25 -- accel/accel.sh@21 -- # val=0 00:07:05.287 11:01:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # IFS=: 00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # read -r var val 00:07:05.287 11:01:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:05.287 11:01:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # IFS=: 00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # read -r var val 00:07:05.287 11:01:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:05.287 11:01:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # IFS=: 00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # read -r var val 00:07:05.287 11:01:25 -- accel/accel.sh@21 -- # val= 00:07:05.287 11:01:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # IFS=: 00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # read -r var val 00:07:05.287 11:01:25 -- accel/accel.sh@21 -- # val=software 00:07:05.287 11:01:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.287 11:01:25 -- accel/accel.sh@23 -- # accel_module=software 00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # IFS=: 00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # read -r var val 00:07:05.287 11:01:25 -- accel/accel.sh@21 -- # val=32 00:07:05.287 11:01:25 -- accel/accel.sh@22 -- # case "$var" in 
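The recurring "EAL: No free 2048 kB hugepages reported on node 1" notice is informational: this job pins everything to core 0 (core mask 0x1), and NUMA node 1 simply has no 2 MiB hugepages reserved. The per-node reservation can be inspected, and adjusted as root, through sysfs:

    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages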
00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # IFS=: 00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # read -r var val 00:07:05.287 11:01:25 -- accel/accel.sh@21 -- # val=32 00:07:05.287 11:01:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # IFS=: 00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # read -r var val 00:07:05.287 11:01:25 -- accel/accel.sh@21 -- # val=1 00:07:05.287 11:01:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # IFS=: 00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # read -r var val 00:07:05.287 11:01:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:05.287 11:01:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # IFS=: 00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # read -r var val 00:07:05.287 11:01:25 -- accel/accel.sh@21 -- # val=Yes 00:07:05.287 11:01:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # IFS=: 00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # read -r var val 00:07:05.287 11:01:25 -- accel/accel.sh@21 -- # val= 00:07:05.287 11:01:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # IFS=: 00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # read -r var val 00:07:05.287 11:01:25 -- accel/accel.sh@21 -- # val= 00:07:05.287 11:01:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # IFS=: 00:07:05.287 11:01:25 -- accel/accel.sh@20 -- # read -r var val 00:07:06.666 11:01:26 -- accel/accel.sh@21 -- # val= 00:07:06.666 11:01:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.666 11:01:26 -- accel/accel.sh@20 -- # IFS=: 00:07:06.666 11:01:26 -- accel/accel.sh@20 -- # read -r var val 00:07:06.666 11:01:26 -- accel/accel.sh@21 -- # val= 00:07:06.666 11:01:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.666 11:01:26 -- accel/accel.sh@20 -- # IFS=: 00:07:06.666 11:01:26 -- accel/accel.sh@20 -- # read -r var val 00:07:06.666 11:01:26 -- accel/accel.sh@21 -- # val= 00:07:06.666 11:01:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.666 11:01:26 -- accel/accel.sh@20 -- # IFS=: 00:07:06.666 11:01:26 -- accel/accel.sh@20 -- # read -r var val 00:07:06.666 11:01:26 -- accel/accel.sh@21 -- # val= 00:07:06.666 11:01:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.666 11:01:26 -- accel/accel.sh@20 -- # IFS=: 00:07:06.666 11:01:26 -- accel/accel.sh@20 -- # read -r var val 00:07:06.666 11:01:26 -- accel/accel.sh@21 -- # val= 00:07:06.666 11:01:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.666 11:01:26 -- accel/accel.sh@20 -- # IFS=: 00:07:06.666 11:01:26 -- accel/accel.sh@20 -- # read -r var val 00:07:06.666 11:01:26 -- accel/accel.sh@21 -- # val= 00:07:06.666 11:01:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.666 11:01:26 -- accel/accel.sh@20 -- # IFS=: 00:07:06.666 11:01:26 -- accel/accel.sh@20 -- # read -r var val 00:07:06.666 11:01:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:06.666 11:01:26 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:06.666 11:01:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.666 00:07:06.666 real 0m2.672s 00:07:06.666 user 0m2.453s 00:07:06.666 sys 0m0.229s 00:07:06.666 11:01:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:06.666 11:01:26 -- common/autotest_common.sh@10 -- # set +x 00:07:06.666 ************************************ 00:07:06.666 END TEST accel_copy_crc32c 00:07:06.666 ************************************ 00:07:06.666 
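Each accel case is bracketed by START TEST / END TEST banners and followed by a real/user/sys time summary, so a pass list can be pulled straight out of a captured console log (the file name below is illustrative):

    grep -E 'START TEST|END TEST' nvmf-phy-autotest-console.log
    grep -c 'END TEST' nvmf-phy-autotest-console.log    # number of cases that completed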
11:01:26 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:06.666 11:01:26 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:06.666 11:01:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:06.666 11:01:26 -- common/autotest_common.sh@10 -- # set +x 00:07:06.666 ************************************ 00:07:06.666 START TEST accel_copy_crc32c_C2 00:07:06.666 ************************************ 00:07:06.666 11:01:26 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:06.666 11:01:26 -- accel/accel.sh@16 -- # local accel_opc 00:07:06.666 11:01:26 -- accel/accel.sh@17 -- # local accel_module 00:07:06.666 11:01:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:06.666 11:01:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:06.666 11:01:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.666 11:01:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.666 11:01:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.666 11:01:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.666 11:01:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.666 11:01:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.666 11:01:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.666 11:01:26 -- accel/accel.sh@42 -- # jq -r . 00:07:06.666 [2024-12-13 11:01:26.889010] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:06.666 [2024-12-13 11:01:26.889071] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1461250 ] 00:07:06.666 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.666 [2024-12-13 11:01:26.940924] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.666 [2024-12-13 11:01:27.005728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.044 11:01:28 -- accel/accel.sh@18 -- # out=' 00:07:08.044 SPDK Configuration: 00:07:08.044 Core mask: 0x1 00:07:08.044 00:07:08.044 Accel Perf Configuration: 00:07:08.044 Workload Type: copy_crc32c 00:07:08.044 CRC-32C seed: 0 00:07:08.044 Vector size: 4096 bytes 00:07:08.044 Transfer size: 8192 bytes 00:07:08.044 Vector count 2 00:07:08.044 Module: software 00:07:08.044 Queue depth: 32 00:07:08.044 Allocate depth: 32 00:07:08.044 # threads/core: 1 00:07:08.044 Run time: 1 seconds 00:07:08.044 Verify: Yes 00:07:08.044 00:07:08.044 Running for 1 seconds... 
00:07:08.044 00:07:08.044 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:08.044 ------------------------------------------------------------------------------------ 00:07:08.044 0,0 262720/s 2052 MiB/s 0 0 00:07:08.044 ==================================================================================== 00:07:08.044 Total 262720/s 1026 MiB/s 0 0' 00:07:08.044 11:01:28 -- accel/accel.sh@20 -- # IFS=: 00:07:08.044 11:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:08.044 11:01:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:08.044 11:01:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:08.044 11:01:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.044 11:01:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.044 11:01:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.044 11:01:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.044 11:01:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.044 11:01:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.044 11:01:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.044 11:01:28 -- accel/accel.sh@42 -- # jq -r . 00:07:08.044 [2024-12-13 11:01:28.222092] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:08.044 [2024-12-13 11:01:28.222171] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1461454 ] 00:07:08.044 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.044 [2024-12-13 11:01:28.274752] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.044 [2024-12-13 11:01:28.337643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.044 11:01:28 -- accel/accel.sh@21 -- # val= 00:07:08.044 11:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.044 11:01:28 -- accel/accel.sh@20 -- # IFS=: 00:07:08.044 11:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:08.044 11:01:28 -- accel/accel.sh@21 -- # val= 00:07:08.044 11:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.044 11:01:28 -- accel/accel.sh@20 -- # IFS=: 00:07:08.044 11:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:08.044 11:01:28 -- accel/accel.sh@21 -- # val=0x1 00:07:08.044 11:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.044 11:01:28 -- accel/accel.sh@20 -- # IFS=: 00:07:08.044 11:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:08.044 11:01:28 -- accel/accel.sh@21 -- # val= 00:07:08.044 11:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.044 11:01:28 -- accel/accel.sh@20 -- # IFS=: 00:07:08.044 11:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:08.044 11:01:28 -- accel/accel.sh@21 -- # val= 00:07:08.044 11:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.044 11:01:28 -- accel/accel.sh@20 -- # IFS=: 00:07:08.044 11:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:08.044 11:01:28 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:08.044 11:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.044 11:01:28 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:08.045 11:01:28 -- accel/accel.sh@20 -- # IFS=: 00:07:08.045 11:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:08.045 11:01:28 -- accel/accel.sh@21 -- # val=0 00:07:08.045 11:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.045 11:01:28 -- accel/accel.sh@20 -- # IFS=: 
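Both rows of the copy_crc32c -C 2 table above report the same 262720 transfers/s, yet show 2052 MiB/s per core against 1026 MiB/s in the total line; the two figures appear to be computed against the chained 8192-byte transfer and the 4096-byte vector size respectively, i.e. a reporting quirk rather than a real drop in throughput:

    awk 'BEGIN { printf "%d MiB/s vs %d MiB/s\n", 262720*8192/2^20, 262720*4096/2^20 }'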
00:07:08.045 11:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:08.045 11:01:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:08.045 11:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.045 11:01:28 -- accel/accel.sh@20 -- # IFS=: 00:07:08.045 11:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:08.045 11:01:28 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:08.045 11:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.045 11:01:28 -- accel/accel.sh@20 -- # IFS=: 00:07:08.045 11:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:08.045 11:01:28 -- accel/accel.sh@21 -- # val= 00:07:08.045 11:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.045 11:01:28 -- accel/accel.sh@20 -- # IFS=: 00:07:08.045 11:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:08.045 11:01:28 -- accel/accel.sh@21 -- # val=software 00:07:08.045 11:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.045 11:01:28 -- accel/accel.sh@23 -- # accel_module=software 00:07:08.045 11:01:28 -- accel/accel.sh@20 -- # IFS=: 00:07:08.045 11:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:08.045 11:01:28 -- accel/accel.sh@21 -- # val=32 00:07:08.045 11:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.045 11:01:28 -- accel/accel.sh@20 -- # IFS=: 00:07:08.045 11:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:08.045 11:01:28 -- accel/accel.sh@21 -- # val=32 00:07:08.045 11:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.045 11:01:28 -- accel/accel.sh@20 -- # IFS=: 00:07:08.045 11:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:08.045 11:01:28 -- accel/accel.sh@21 -- # val=1 00:07:08.045 11:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.045 11:01:28 -- accel/accel.sh@20 -- # IFS=: 00:07:08.045 11:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:08.045 11:01:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:08.045 11:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.045 11:01:28 -- accel/accel.sh@20 -- # IFS=: 00:07:08.045 11:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:08.045 11:01:28 -- accel/accel.sh@21 -- # val=Yes 00:07:08.045 11:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.045 11:01:28 -- accel/accel.sh@20 -- # IFS=: 00:07:08.045 11:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:08.045 11:01:28 -- accel/accel.sh@21 -- # val= 00:07:08.045 11:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.045 11:01:28 -- accel/accel.sh@20 -- # IFS=: 00:07:08.045 11:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:08.045 11:01:28 -- accel/accel.sh@21 -- # val= 00:07:08.045 11:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.045 11:01:28 -- accel/accel.sh@20 -- # IFS=: 00:07:08.045 11:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:08.982 11:01:29 -- accel/accel.sh@21 -- # val= 00:07:08.982 11:01:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.982 11:01:29 -- accel/accel.sh@20 -- # IFS=: 00:07:08.982 11:01:29 -- accel/accel.sh@20 -- # read -r var val 00:07:08.982 11:01:29 -- accel/accel.sh@21 -- # val= 00:07:08.982 11:01:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.982 11:01:29 -- accel/accel.sh@20 -- # IFS=: 00:07:08.982 11:01:29 -- accel/accel.sh@20 -- # read -r var val 00:07:08.982 11:01:29 -- accel/accel.sh@21 -- # val= 00:07:08.982 11:01:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.982 11:01:29 -- accel/accel.sh@20 -- # IFS=: 00:07:08.982 11:01:29 -- accel/accel.sh@20 -- # read -r var val 00:07:08.982 11:01:29 -- accel/accel.sh@21 -- # val= 00:07:08.982 11:01:29 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:08.982 11:01:29 -- accel/accel.sh@20 -- # IFS=: 00:07:08.982 11:01:29 -- accel/accel.sh@20 -- # read -r var val 00:07:08.982 11:01:29 -- accel/accel.sh@21 -- # val= 00:07:08.982 11:01:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.982 11:01:29 -- accel/accel.sh@20 -- # IFS=: 00:07:08.982 11:01:29 -- accel/accel.sh@20 -- # read -r var val 00:07:08.982 11:01:29 -- accel/accel.sh@21 -- # val= 00:07:08.982 11:01:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.982 11:01:29 -- accel/accel.sh@20 -- # IFS=: 00:07:08.982 11:01:29 -- accel/accel.sh@20 -- # read -r var val 00:07:08.982 11:01:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:08.983 11:01:29 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:08.983 11:01:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.983 00:07:08.983 real 0m2.669s 00:07:08.983 user 0m2.467s 00:07:08.983 sys 0m0.211s 00:07:08.983 11:01:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:08.983 11:01:29 -- common/autotest_common.sh@10 -- # set +x 00:07:08.983 ************************************ 00:07:08.983 END TEST accel_copy_crc32c_C2 00:07:08.983 ************************************ 00:07:09.276 11:01:29 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:09.276 11:01:29 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:09.276 11:01:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.276 11:01:29 -- common/autotest_common.sh@10 -- # set +x 00:07:09.276 ************************************ 00:07:09.276 START TEST accel_dualcast 00:07:09.276 ************************************ 00:07:09.276 11:01:29 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:07:09.276 11:01:29 -- accel/accel.sh@16 -- # local accel_opc 00:07:09.276 11:01:29 -- accel/accel.sh@17 -- # local accel_module 00:07:09.276 11:01:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:09.276 11:01:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:09.276 11:01:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.276 11:01:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.276 11:01:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.276 11:01:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.276 11:01:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.276 11:01:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.276 11:01:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.276 11:01:29 -- accel/accel.sh@42 -- # jq -r . 00:07:09.276 [2024-12-13 11:01:29.597736] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:09.276 [2024-12-13 11:01:29.597805] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1461716 ] 00:07:09.276 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.276 [2024-12-13 11:01:29.654137] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.276 [2024-12-13 11:01:29.717167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.761 11:01:30 -- accel/accel.sh@18 -- # out=' 00:07:10.761 SPDK Configuration: 00:07:10.761 Core mask: 0x1 00:07:10.761 00:07:10.761 Accel Perf Configuration: 00:07:10.761 Workload Type: dualcast 00:07:10.761 Transfer size: 4096 bytes 00:07:10.761 Vector count 1 00:07:10.761 Module: software 00:07:10.761 Queue depth: 32 00:07:10.761 Allocate depth: 32 00:07:10.761 # threads/core: 1 00:07:10.761 Run time: 1 seconds 00:07:10.761 Verify: Yes 00:07:10.761 00:07:10.761 Running for 1 seconds... 00:07:10.761 00:07:10.761 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:10.761 ------------------------------------------------------------------------------------ 00:07:10.761 0,0 530752/s 2073 MiB/s 0 0 00:07:10.761 ==================================================================================== 00:07:10.761 Total 530752/s 2073 MiB/s 0 0' 00:07:10.761 11:01:30 -- accel/accel.sh@20 -- # IFS=: 00:07:10.761 11:01:30 -- accel/accel.sh@20 -- # read -r var val 00:07:10.761 11:01:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:10.761 11:01:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:10.761 11:01:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.761 11:01:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.761 11:01:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.761 11:01:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.761 11:01:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.761 11:01:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.761 11:01:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.761 11:01:30 -- accel/accel.sh@42 -- # jq -r . 00:07:10.761 [2024-12-13 11:01:30.934265] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
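When reading the dualcast numbers above, note that each operation copies one 4096-byte source buffer into two destination buffers; the MiB/s column counts source bytes (530752 x 4096 bytes/s), so the data actually written is roughly double that figure:

    awk 'BEGIN { printf "source %d MiB/s, written ~%d MiB/s\n", 530752*4096/2^20, 2*530752*4096/2^20 }'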
00:07:10.762 [2024-12-13 11:01:30.934350] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1461988 ] 00:07:10.762 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.762 [2024-12-13 11:01:30.987679] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.762 [2024-12-13 11:01:31.050219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.762 11:01:31 -- accel/accel.sh@21 -- # val= 00:07:10.762 11:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.762 11:01:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.762 11:01:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.762 11:01:31 -- accel/accel.sh@21 -- # val= 00:07:10.762 11:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.762 11:01:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.762 11:01:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.762 11:01:31 -- accel/accel.sh@21 -- # val=0x1 00:07:10.762 11:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.762 11:01:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.762 11:01:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.762 11:01:31 -- accel/accel.sh@21 -- # val= 00:07:10.762 11:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.762 11:01:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.762 11:01:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.762 11:01:31 -- accel/accel.sh@21 -- # val= 00:07:10.762 11:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.762 11:01:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.762 11:01:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.762 11:01:31 -- accel/accel.sh@21 -- # val=dualcast 00:07:10.762 11:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.762 11:01:31 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:10.762 11:01:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.762 11:01:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.762 11:01:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:10.762 11:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.762 11:01:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.762 11:01:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.762 11:01:31 -- accel/accel.sh@21 -- # val= 00:07:10.762 11:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.762 11:01:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.762 11:01:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.762 11:01:31 -- accel/accel.sh@21 -- # val=software 00:07:10.762 11:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.762 11:01:31 -- accel/accel.sh@23 -- # accel_module=software 00:07:10.762 11:01:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.762 11:01:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.762 11:01:31 -- accel/accel.sh@21 -- # val=32 00:07:10.762 11:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.762 11:01:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.762 11:01:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.762 11:01:31 -- accel/accel.sh@21 -- # val=32 00:07:10.762 11:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.762 11:01:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.762 11:01:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.762 11:01:31 -- accel/accel.sh@21 -- # val=1 00:07:10.762 11:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.762 11:01:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.762 11:01:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.762 11:01:31 
-- accel/accel.sh@21 -- # val='1 seconds' 00:07:10.762 11:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.762 11:01:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.762 11:01:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.762 11:01:31 -- accel/accel.sh@21 -- # val=Yes 00:07:10.762 11:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.762 11:01:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.762 11:01:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.762 11:01:31 -- accel/accel.sh@21 -- # val= 00:07:10.762 11:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.762 11:01:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.762 11:01:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.762 11:01:31 -- accel/accel.sh@21 -- # val= 00:07:10.762 11:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.762 11:01:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.762 11:01:31 -- accel/accel.sh@20 -- # read -r var val 00:07:11.699 11:01:32 -- accel/accel.sh@21 -- # val= 00:07:11.699 11:01:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.699 11:01:32 -- accel/accel.sh@20 -- # IFS=: 00:07:11.699 11:01:32 -- accel/accel.sh@20 -- # read -r var val 00:07:11.699 11:01:32 -- accel/accel.sh@21 -- # val= 00:07:11.699 11:01:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.699 11:01:32 -- accel/accel.sh@20 -- # IFS=: 00:07:11.699 11:01:32 -- accel/accel.sh@20 -- # read -r var val 00:07:11.699 11:01:32 -- accel/accel.sh@21 -- # val= 00:07:11.699 11:01:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.699 11:01:32 -- accel/accel.sh@20 -- # IFS=: 00:07:11.699 11:01:32 -- accel/accel.sh@20 -- # read -r var val 00:07:11.699 11:01:32 -- accel/accel.sh@21 -- # val= 00:07:11.699 11:01:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.699 11:01:32 -- accel/accel.sh@20 -- # IFS=: 00:07:11.699 11:01:32 -- accel/accel.sh@20 -- # read -r var val 00:07:11.699 11:01:32 -- accel/accel.sh@21 -- # val= 00:07:11.699 11:01:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.699 11:01:32 -- accel/accel.sh@20 -- # IFS=: 00:07:11.699 11:01:32 -- accel/accel.sh@20 -- # read -r var val 00:07:11.699 11:01:32 -- accel/accel.sh@21 -- # val= 00:07:11.699 11:01:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.699 11:01:32 -- accel/accel.sh@20 -- # IFS=: 00:07:11.699 11:01:32 -- accel/accel.sh@20 -- # read -r var val 00:07:11.699 11:01:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:11.699 11:01:32 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:11.699 11:01:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.699 00:07:11.699 real 0m2.674s 00:07:11.699 user 0m2.458s 00:07:11.699 sys 0m0.223s 00:07:11.699 11:01:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:11.699 11:01:32 -- common/autotest_common.sh@10 -- # set +x 00:07:11.699 ************************************ 00:07:11.699 END TEST accel_dualcast 00:07:11.699 ************************************ 00:07:11.958 11:01:32 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:11.958 11:01:32 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:11.958 11:01:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:11.958 11:01:32 -- common/autotest_common.sh@10 -- # set +x 00:07:11.958 ************************************ 00:07:11.958 START TEST accel_compare 00:07:11.958 ************************************ 00:07:11.958 11:01:32 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:07:11.958 11:01:32 -- accel/accel.sh@16 -- # local accel_opc 00:07:11.958 11:01:32 
-- accel/accel.sh@17 -- # local accel_module 00:07:11.958 11:01:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:11.958 11:01:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:11.959 11:01:32 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.959 11:01:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.959 11:01:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.959 11:01:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.959 11:01:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.959 11:01:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.959 11:01:32 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.959 11:01:32 -- accel/accel.sh@42 -- # jq -r . 00:07:11.959 [2024-12-13 11:01:32.309919] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:11.959 [2024-12-13 11:01:32.309995] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1462276 ] 00:07:11.959 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.959 [2024-12-13 11:01:32.364683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.959 [2024-12-13 11:01:32.426962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.337 11:01:33 -- accel/accel.sh@18 -- # out=' 00:07:13.337 SPDK Configuration: 00:07:13.337 Core mask: 0x1 00:07:13.337 00:07:13.337 Accel Perf Configuration: 00:07:13.337 Workload Type: compare 00:07:13.337 Transfer size: 4096 bytes 00:07:13.337 Vector count 1 00:07:13.337 Module: software 00:07:13.337 Queue depth: 32 00:07:13.337 Allocate depth: 32 00:07:13.337 # threads/core: 1 00:07:13.337 Run time: 1 seconds 00:07:13.337 Verify: Yes 00:07:13.337 00:07:13.337 Running for 1 seconds... 00:07:13.337 00:07:13.337 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:13.337 ------------------------------------------------------------------------------------ 00:07:13.337 0,0 669728/s 2616 MiB/s 0 0 00:07:13.337 ==================================================================================== 00:07:13.337 Total 669728/s 2616 MiB/s 0 0' 00:07:13.337 11:01:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:13.337 11:01:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.337 11:01:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.337 11:01:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:13.337 11:01:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.337 11:01:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.337 11:01:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.337 11:01:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.337 11:01:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.337 11:01:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.337 11:01:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.337 11:01:33 -- accel/accel.sh@42 -- # jq -r . 00:07:13.337 [2024-12-13 11:01:33.628382] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
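Every "SPDK Configuration" block in this section reports "Module: software", i.e. the operations run on the accel framework's software fallback rather than a hardware engine, consistent with the [[ software == software ]] assertions in the trace. A quick way to confirm that from a captured log (file name illustrative):

    grep -o 'Module: [a-z]*' nvmf-phy-autotest-console.log | sort | uniq -c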
00:07:13.337 [2024-12-13 11:01:33.628429] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1462542 ] 00:07:13.337 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.337 [2024-12-13 11:01:33.677755] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.337 [2024-12-13 11:01:33.739745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.337 11:01:33 -- accel/accel.sh@21 -- # val= 00:07:13.337 11:01:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.337 11:01:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.337 11:01:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.337 11:01:33 -- accel/accel.sh@21 -- # val= 00:07:13.337 11:01:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.337 11:01:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.337 11:01:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.337 11:01:33 -- accel/accel.sh@21 -- # val=0x1 00:07:13.337 11:01:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.337 11:01:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.337 11:01:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.337 11:01:33 -- accel/accel.sh@21 -- # val= 00:07:13.337 11:01:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.337 11:01:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.337 11:01:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.337 11:01:33 -- accel/accel.sh@21 -- # val= 00:07:13.337 11:01:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.337 11:01:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.337 11:01:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.337 11:01:33 -- accel/accel.sh@21 -- # val=compare 00:07:13.337 11:01:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.338 11:01:33 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:13.338 11:01:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.338 11:01:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.338 11:01:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:13.338 11:01:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.338 11:01:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.338 11:01:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.338 11:01:33 -- accel/accel.sh@21 -- # val= 00:07:13.338 11:01:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.338 11:01:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.338 11:01:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.338 11:01:33 -- accel/accel.sh@21 -- # val=software 00:07:13.338 11:01:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.338 11:01:33 -- accel/accel.sh@23 -- # accel_module=software 00:07:13.338 11:01:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.338 11:01:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.338 11:01:33 -- accel/accel.sh@21 -- # val=32 00:07:13.338 11:01:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.338 11:01:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.338 11:01:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.338 11:01:33 -- accel/accel.sh@21 -- # val=32 00:07:13.338 11:01:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.338 11:01:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.338 11:01:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.338 11:01:33 -- accel/accel.sh@21 -- # val=1 00:07:13.338 11:01:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.338 11:01:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.338 11:01:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.338 11:01:33 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:07:13.338 11:01:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.338 11:01:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.338 11:01:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.338 11:01:33 -- accel/accel.sh@21 -- # val=Yes 00:07:13.338 11:01:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.338 11:01:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.338 11:01:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.338 11:01:33 -- accel/accel.sh@21 -- # val= 00:07:13.338 11:01:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.338 11:01:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.338 11:01:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.338 11:01:33 -- accel/accel.sh@21 -- # val= 00:07:13.338 11:01:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.338 11:01:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.338 11:01:33 -- accel/accel.sh@20 -- # read -r var val 00:07:14.717 11:01:34 -- accel/accel.sh@21 -- # val= 00:07:14.717 11:01:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.717 11:01:34 -- accel/accel.sh@20 -- # IFS=: 00:07:14.717 11:01:34 -- accel/accel.sh@20 -- # read -r var val 00:07:14.717 11:01:34 -- accel/accel.sh@21 -- # val= 00:07:14.717 11:01:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.718 11:01:34 -- accel/accel.sh@20 -- # IFS=: 00:07:14.718 11:01:34 -- accel/accel.sh@20 -- # read -r var val 00:07:14.718 11:01:34 -- accel/accel.sh@21 -- # val= 00:07:14.718 11:01:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.718 11:01:34 -- accel/accel.sh@20 -- # IFS=: 00:07:14.718 11:01:34 -- accel/accel.sh@20 -- # read -r var val 00:07:14.718 11:01:34 -- accel/accel.sh@21 -- # val= 00:07:14.718 11:01:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.718 11:01:34 -- accel/accel.sh@20 -- # IFS=: 00:07:14.718 11:01:34 -- accel/accel.sh@20 -- # read -r var val 00:07:14.718 11:01:34 -- accel/accel.sh@21 -- # val= 00:07:14.718 11:01:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.718 11:01:34 -- accel/accel.sh@20 -- # IFS=: 00:07:14.718 11:01:34 -- accel/accel.sh@20 -- # read -r var val 00:07:14.718 11:01:34 -- accel/accel.sh@21 -- # val= 00:07:14.718 11:01:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.718 11:01:34 -- accel/accel.sh@20 -- # IFS=: 00:07:14.718 11:01:34 -- accel/accel.sh@20 -- # read -r var val 00:07:14.718 11:01:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:14.718 11:01:34 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:14.718 11:01:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.718 00:07:14.718 real 0m2.651s 00:07:14.718 user 0m2.446s 00:07:14.718 sys 0m0.212s 00:07:14.718 11:01:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:14.718 11:01:34 -- common/autotest_common.sh@10 -- # set +x 00:07:14.718 ************************************ 00:07:14.718 END TEST accel_compare 00:07:14.718 ************************************ 00:07:14.718 11:01:34 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:14.718 11:01:34 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:14.718 11:01:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.718 11:01:34 -- common/autotest_common.sh@10 -- # set +x 00:07:14.718 ************************************ 00:07:14.718 START TEST accel_xor 00:07:14.718 ************************************ 00:07:14.718 11:01:34 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:07:14.718 11:01:34 -- accel/accel.sh@16 -- # local accel_opc 00:07:14.718 11:01:34 -- accel/accel.sh@17 
-- # local accel_module 00:07:14.718 11:01:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:14.718 11:01:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:14.718 11:01:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.718 11:01:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.718 11:01:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.718 11:01:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.718 11:01:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.718 11:01:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.718 11:01:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.718 11:01:34 -- accel/accel.sh@42 -- # jq -r . 00:07:14.718 [2024-12-13 11:01:34.998094] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:14.718 [2024-12-13 11:01:34.998162] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1462821 ] 00:07:14.718 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.718 [2024-12-13 11:01:35.051076] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.718 [2024-12-13 11:01:35.114641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.096 11:01:36 -- accel/accel.sh@18 -- # out=' 00:07:16.097 SPDK Configuration: 00:07:16.097 Core mask: 0x1 00:07:16.097 00:07:16.097 Accel Perf Configuration: 00:07:16.097 Workload Type: xor 00:07:16.097 Source buffers: 2 00:07:16.097 Transfer size: 4096 bytes 00:07:16.097 Vector count 1 00:07:16.097 Module: software 00:07:16.097 Queue depth: 32 00:07:16.097 Allocate depth: 32 00:07:16.097 # threads/core: 1 00:07:16.097 Run time: 1 seconds 00:07:16.097 Verify: Yes 00:07:16.097 00:07:16.097 Running for 1 seconds... 00:07:16.097 00:07:16.097 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:16.097 ------------------------------------------------------------------------------------ 00:07:16.097 0,0 526944/s 2058 MiB/s 0 0 00:07:16.097 ==================================================================================== 00:07:16.097 Total 526944/s 2058 MiB/s 0 0' 00:07:16.097 11:01:36 -- accel/accel.sh@20 -- # IFS=: 00:07:16.097 11:01:36 -- accel/accel.sh@20 -- # read -r var val 00:07:16.097 11:01:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:16.097 11:01:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:16.097 11:01:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.097 11:01:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.097 11:01:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.097 11:01:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.097 11:01:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.097 11:01:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.097 11:01:36 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.097 11:01:36 -- accel/accel.sh@42 -- # jq -r . 00:07:16.097 [2024-12-13 11:01:36.329771] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
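The xor test follows the same pattern with -w xor; the "Source buffers: 2" line in the report is the workload's default buffer count. Sketched standalone, again without the wrapper's -c /dev/fd/62 config:

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y
  # software xor over two 4096-byte source buffers for 1 second, verification enabled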
00:07:16.097 [2024-12-13 11:01:36.329851] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1463095 ] 00:07:16.097 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.097 [2024-12-13 11:01:36.381673] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.097 [2024-12-13 11:01:36.443787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.097 11:01:36 -- accel/accel.sh@21 -- # val= 00:07:16.097 11:01:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.097 11:01:36 -- accel/accel.sh@20 -- # IFS=: 00:07:16.097 11:01:36 -- accel/accel.sh@20 -- # read -r var val 00:07:16.097 11:01:36 -- accel/accel.sh@21 -- # val= 00:07:16.097 11:01:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.097 11:01:36 -- accel/accel.sh@20 -- # IFS=: 00:07:16.097 11:01:36 -- accel/accel.sh@20 -- # read -r var val 00:07:16.097 11:01:36 -- accel/accel.sh@21 -- # val=0x1 00:07:16.097 11:01:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.097 11:01:36 -- accel/accel.sh@20 -- # IFS=: 00:07:16.097 11:01:36 -- accel/accel.sh@20 -- # read -r var val 00:07:16.097 11:01:36 -- accel/accel.sh@21 -- # val= 00:07:16.097 11:01:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.097 11:01:36 -- accel/accel.sh@20 -- # IFS=: 00:07:16.097 11:01:36 -- accel/accel.sh@20 -- # read -r var val 00:07:16.097 11:01:36 -- accel/accel.sh@21 -- # val= 00:07:16.097 11:01:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.097 11:01:36 -- accel/accel.sh@20 -- # IFS=: 00:07:16.097 11:01:36 -- accel/accel.sh@20 -- # read -r var val 00:07:16.097 11:01:36 -- accel/accel.sh@21 -- # val=xor 00:07:16.097 11:01:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.097 11:01:36 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:16.097 11:01:36 -- accel/accel.sh@20 -- # IFS=: 00:07:16.097 11:01:36 -- accel/accel.sh@20 -- # read -r var val 00:07:16.097 11:01:36 -- accel/accel.sh@21 -- # val=2 00:07:16.097 11:01:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.097 11:01:36 -- accel/accel.sh@20 -- # IFS=: 00:07:16.097 11:01:36 -- accel/accel.sh@20 -- # read -r var val 00:07:16.097 11:01:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:16.097 11:01:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.097 11:01:36 -- accel/accel.sh@20 -- # IFS=: 00:07:16.097 11:01:36 -- accel/accel.sh@20 -- # read -r var val 00:07:16.097 11:01:36 -- accel/accel.sh@21 -- # val= 00:07:16.097 11:01:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.097 11:01:36 -- accel/accel.sh@20 -- # IFS=: 00:07:16.097 11:01:36 -- accel/accel.sh@20 -- # read -r var val 00:07:16.097 11:01:36 -- accel/accel.sh@21 -- # val=software 00:07:16.097 11:01:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.097 11:01:36 -- accel/accel.sh@23 -- # accel_module=software 00:07:16.097 11:01:36 -- accel/accel.sh@20 -- # IFS=: 00:07:16.097 11:01:36 -- accel/accel.sh@20 -- # read -r var val 00:07:16.097 11:01:36 -- accel/accel.sh@21 -- # val=32 00:07:16.097 11:01:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.097 11:01:36 -- accel/accel.sh@20 -- # IFS=: 00:07:16.097 11:01:36 -- accel/accel.sh@20 -- # read -r var val 00:07:16.097 11:01:36 -- accel/accel.sh@21 -- # val=32 00:07:16.097 11:01:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.097 11:01:36 -- accel/accel.sh@20 -- # IFS=: 00:07:16.097 11:01:36 -- accel/accel.sh@20 -- # read -r var val 00:07:16.097 11:01:36 -- 
accel/accel.sh@21 -- # val=1 00:07:16.097 11:01:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.097 11:01:36 -- accel/accel.sh@20 -- # IFS=: 00:07:16.097 11:01:36 -- accel/accel.sh@20 -- # read -r var val 00:07:16.097 11:01:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:16.097 11:01:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.097 11:01:36 -- accel/accel.sh@20 -- # IFS=: 00:07:16.097 11:01:36 -- accel/accel.sh@20 -- # read -r var val 00:07:16.097 11:01:36 -- accel/accel.sh@21 -- # val=Yes 00:07:16.097 11:01:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.097 11:01:36 -- accel/accel.sh@20 -- # IFS=: 00:07:16.097 11:01:36 -- accel/accel.sh@20 -- # read -r var val 00:07:16.097 11:01:36 -- accel/accel.sh@21 -- # val= 00:07:16.097 11:01:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.097 11:01:36 -- accel/accel.sh@20 -- # IFS=: 00:07:16.097 11:01:36 -- accel/accel.sh@20 -- # read -r var val 00:07:16.097 11:01:36 -- accel/accel.sh@21 -- # val= 00:07:16.097 11:01:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.097 11:01:36 -- accel/accel.sh@20 -- # IFS=: 00:07:16.097 11:01:36 -- accel/accel.sh@20 -- # read -r var val 00:07:17.475 11:01:37 -- accel/accel.sh@21 -- # val= 00:07:17.475 11:01:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.475 11:01:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.475 11:01:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.475 11:01:37 -- accel/accel.sh@21 -- # val= 00:07:17.475 11:01:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.475 11:01:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.475 11:01:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.475 11:01:37 -- accel/accel.sh@21 -- # val= 00:07:17.475 11:01:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.475 11:01:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.475 11:01:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.475 11:01:37 -- accel/accel.sh@21 -- # val= 00:07:17.475 11:01:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.475 11:01:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.475 11:01:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.475 11:01:37 -- accel/accel.sh@21 -- # val= 00:07:17.475 11:01:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.475 11:01:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.475 11:01:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.475 11:01:37 -- accel/accel.sh@21 -- # val= 00:07:17.475 11:01:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.475 11:01:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.475 11:01:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.475 11:01:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:17.475 11:01:37 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:17.475 11:01:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.475 00:07:17.475 real 0m2.666s 00:07:17.475 user 0m2.452s 00:07:17.475 sys 0m0.222s 00:07:17.475 11:01:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:17.475 11:01:37 -- common/autotest_common.sh@10 -- # set +x 00:07:17.475 ************************************ 00:07:17.475 END TEST accel_xor 00:07:17.475 ************************************ 00:07:17.475 11:01:37 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:17.475 11:01:37 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:17.475 11:01:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.475 11:01:37 -- common/autotest_common.sh@10 -- # set +x 00:07:17.475 ************************************ 00:07:17.475 START TEST accel_xor 
00:07:17.475 ************************************ 00:07:17.475 11:01:37 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:07:17.475 11:01:37 -- accel/accel.sh@16 -- # local accel_opc 00:07:17.475 11:01:37 -- accel/accel.sh@17 -- # local accel_module 00:07:17.475 11:01:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:17.475 11:01:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:17.475 11:01:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.475 11:01:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.475 11:01:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.475 11:01:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.475 11:01:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.475 11:01:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.475 11:01:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.475 11:01:37 -- accel/accel.sh@42 -- # jq -r . 00:07:17.475 [2024-12-13 11:01:37.703636] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:17.475 [2024-12-13 11:01:37.703716] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1463375 ] 00:07:17.475 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.475 [2024-12-13 11:01:37.756537] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.475 [2024-12-13 11:01:37.821115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.853 11:01:39 -- accel/accel.sh@18 -- # out=' 00:07:18.853 SPDK Configuration: 00:07:18.853 Core mask: 0x1 00:07:18.853 00:07:18.853 Accel Perf Configuration: 00:07:18.853 Workload Type: xor 00:07:18.853 Source buffers: 3 00:07:18.853 Transfer size: 4096 bytes 00:07:18.853 Vector count 1 00:07:18.853 Module: software 00:07:18.853 Queue depth: 32 00:07:18.853 Allocate depth: 32 00:07:18.853 # threads/core: 1 00:07:18.853 Run time: 1 seconds 00:07:18.853 Verify: Yes 00:07:18.853 00:07:18.853 Running for 1 seconds... 00:07:18.853 00:07:18.853 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:18.853 ------------------------------------------------------------------------------------ 00:07:18.853 0,0 495072/s 1933 MiB/s 0 0 00:07:18.853 ==================================================================================== 00:07:18.853 Total 495072/s 1933 MiB/s 0 0' 00:07:18.853 11:01:39 -- accel/accel.sh@20 -- # IFS=: 00:07:18.853 11:01:39 -- accel/accel.sh@20 -- # read -r var val 00:07:18.853 11:01:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:18.853 11:01:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:18.853 11:01:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.853 11:01:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.853 11:01:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.853 11:01:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.853 11:01:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.853 11:01:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.853 11:01:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.853 11:01:39 -- accel/accel.sh@42 -- # jq -r . 00:07:18.853 [2024-12-13 11:01:39.037780] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
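The follow-up xor test only adds -x 3, which is what raises the configuration above from two to three source buffers. A comparable manual invocation (wrapper-supplied config omitted, as before):

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3
  # same xor workload with three source buffers, matching "Source buffers: 3" in the report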
00:07:18.854 [2024-12-13 11:01:39.037859] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1463639 ] 00:07:18.854 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.854 [2024-12-13 11:01:39.092179] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.854 [2024-12-13 11:01:39.154595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.854 11:01:39 -- accel/accel.sh@21 -- # val= 00:07:18.854 11:01:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.854 11:01:39 -- accel/accel.sh@20 -- # IFS=: 00:07:18.854 11:01:39 -- accel/accel.sh@20 -- # read -r var val 00:07:18.854 11:01:39 -- accel/accel.sh@21 -- # val= 00:07:18.854 11:01:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.854 11:01:39 -- accel/accel.sh@20 -- # IFS=: 00:07:18.854 11:01:39 -- accel/accel.sh@20 -- # read -r var val 00:07:18.854 11:01:39 -- accel/accel.sh@21 -- # val=0x1 00:07:18.854 11:01:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.854 11:01:39 -- accel/accel.sh@20 -- # IFS=: 00:07:18.854 11:01:39 -- accel/accel.sh@20 -- # read -r var val 00:07:18.854 11:01:39 -- accel/accel.sh@21 -- # val= 00:07:18.854 11:01:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.854 11:01:39 -- accel/accel.sh@20 -- # IFS=: 00:07:18.854 11:01:39 -- accel/accel.sh@20 -- # read -r var val 00:07:18.854 11:01:39 -- accel/accel.sh@21 -- # val= 00:07:18.854 11:01:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.854 11:01:39 -- accel/accel.sh@20 -- # IFS=: 00:07:18.854 11:01:39 -- accel/accel.sh@20 -- # read -r var val 00:07:18.854 11:01:39 -- accel/accel.sh@21 -- # val=xor 00:07:18.854 11:01:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.854 11:01:39 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:18.854 11:01:39 -- accel/accel.sh@20 -- # IFS=: 00:07:18.854 11:01:39 -- accel/accel.sh@20 -- # read -r var val 00:07:18.854 11:01:39 -- accel/accel.sh@21 -- # val=3 00:07:18.854 11:01:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.854 11:01:39 -- accel/accel.sh@20 -- # IFS=: 00:07:18.854 11:01:39 -- accel/accel.sh@20 -- # read -r var val 00:07:18.854 11:01:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:18.854 11:01:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.854 11:01:39 -- accel/accel.sh@20 -- # IFS=: 00:07:18.854 11:01:39 -- accel/accel.sh@20 -- # read -r var val 00:07:18.854 11:01:39 -- accel/accel.sh@21 -- # val= 00:07:18.854 11:01:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.854 11:01:39 -- accel/accel.sh@20 -- # IFS=: 00:07:18.854 11:01:39 -- accel/accel.sh@20 -- # read -r var val 00:07:18.854 11:01:39 -- accel/accel.sh@21 -- # val=software 00:07:18.854 11:01:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.854 11:01:39 -- accel/accel.sh@23 -- # accel_module=software 00:07:18.854 11:01:39 -- accel/accel.sh@20 -- # IFS=: 00:07:18.854 11:01:39 -- accel/accel.sh@20 -- # read -r var val 00:07:18.854 11:01:39 -- accel/accel.sh@21 -- # val=32 00:07:18.854 11:01:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.854 11:01:39 -- accel/accel.sh@20 -- # IFS=: 00:07:18.854 11:01:39 -- accel/accel.sh@20 -- # read -r var val 00:07:18.854 11:01:39 -- accel/accel.sh@21 -- # val=32 00:07:18.854 11:01:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.854 11:01:39 -- accel/accel.sh@20 -- # IFS=: 00:07:18.854 11:01:39 -- accel/accel.sh@20 -- # read -r var val 00:07:18.854 11:01:39 -- 
accel/accel.sh@21 -- # val=1 00:07:18.854 11:01:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.854 11:01:39 -- accel/accel.sh@20 -- # IFS=: 00:07:18.854 11:01:39 -- accel/accel.sh@20 -- # read -r var val 00:07:18.854 11:01:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:18.854 11:01:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.854 11:01:39 -- accel/accel.sh@20 -- # IFS=: 00:07:18.854 11:01:39 -- accel/accel.sh@20 -- # read -r var val 00:07:18.854 11:01:39 -- accel/accel.sh@21 -- # val=Yes 00:07:18.854 11:01:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.854 11:01:39 -- accel/accel.sh@20 -- # IFS=: 00:07:18.854 11:01:39 -- accel/accel.sh@20 -- # read -r var val 00:07:18.854 11:01:39 -- accel/accel.sh@21 -- # val= 00:07:18.854 11:01:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.854 11:01:39 -- accel/accel.sh@20 -- # IFS=: 00:07:18.854 11:01:39 -- accel/accel.sh@20 -- # read -r var val 00:07:18.854 11:01:39 -- accel/accel.sh@21 -- # val= 00:07:18.854 11:01:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.854 11:01:39 -- accel/accel.sh@20 -- # IFS=: 00:07:18.854 11:01:39 -- accel/accel.sh@20 -- # read -r var val 00:07:19.790 11:01:40 -- accel/accel.sh@21 -- # val= 00:07:19.790 11:01:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.790 11:01:40 -- accel/accel.sh@20 -- # IFS=: 00:07:19.790 11:01:40 -- accel/accel.sh@20 -- # read -r var val 00:07:19.790 11:01:40 -- accel/accel.sh@21 -- # val= 00:07:19.790 11:01:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.790 11:01:40 -- accel/accel.sh@20 -- # IFS=: 00:07:19.790 11:01:40 -- accel/accel.sh@20 -- # read -r var val 00:07:19.790 11:01:40 -- accel/accel.sh@21 -- # val= 00:07:19.790 11:01:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.790 11:01:40 -- accel/accel.sh@20 -- # IFS=: 00:07:19.790 11:01:40 -- accel/accel.sh@20 -- # read -r var val 00:07:19.790 11:01:40 -- accel/accel.sh@21 -- # val= 00:07:19.790 11:01:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.790 11:01:40 -- accel/accel.sh@20 -- # IFS=: 00:07:19.790 11:01:40 -- accel/accel.sh@20 -- # read -r var val 00:07:19.790 11:01:40 -- accel/accel.sh@21 -- # val= 00:07:19.790 11:01:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.790 11:01:40 -- accel/accel.sh@20 -- # IFS=: 00:07:19.790 11:01:40 -- accel/accel.sh@20 -- # read -r var val 00:07:19.790 11:01:40 -- accel/accel.sh@21 -- # val= 00:07:19.790 11:01:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.790 11:01:40 -- accel/accel.sh@20 -- # IFS=: 00:07:19.790 11:01:40 -- accel/accel.sh@20 -- # read -r var val 00:07:19.790 11:01:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:19.790 11:01:40 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:19.790 11:01:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.790 00:07:19.790 real 0m2.675s 00:07:19.790 user 0m2.459s 00:07:19.790 sys 0m0.222s 00:07:19.790 11:01:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:19.790 11:01:40 -- common/autotest_common.sh@10 -- # set +x 00:07:19.790 ************************************ 00:07:19.790 END TEST accel_xor 00:07:19.790 ************************************ 00:07:20.048 11:01:40 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:20.048 11:01:40 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:20.048 11:01:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:20.048 11:01:40 -- common/autotest_common.sh@10 -- # set +x 00:07:20.048 ************************************ 00:07:20.048 START TEST 
accel_dif_verify 00:07:20.048 ************************************ 00:07:20.049 11:01:40 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:07:20.049 11:01:40 -- accel/accel.sh@16 -- # local accel_opc 00:07:20.049 11:01:40 -- accel/accel.sh@17 -- # local accel_module 00:07:20.049 11:01:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:20.049 11:01:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:20.049 11:01:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.049 11:01:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.049 11:01:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.049 11:01:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.049 11:01:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.049 11:01:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.049 11:01:40 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.049 11:01:40 -- accel/accel.sh@42 -- # jq -r . 00:07:20.049 [2024-12-13 11:01:40.417447] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:20.049 [2024-12-13 11:01:40.417524] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1463930 ] 00:07:20.049 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.049 [2024-12-13 11:01:40.470493] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.049 [2024-12-13 11:01:40.535432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.426 11:01:41 -- accel/accel.sh@18 -- # out=' 00:07:21.426 SPDK Configuration: 00:07:21.426 Core mask: 0x1 00:07:21.426 00:07:21.426 Accel Perf Configuration: 00:07:21.426 Workload Type: dif_verify 00:07:21.426 Vector size: 4096 bytes 00:07:21.426 Transfer size: 4096 bytes 00:07:21.426 Block size: 512 bytes 00:07:21.426 Metadata size: 8 bytes 00:07:21.426 Vector count 1 00:07:21.426 Module: software 00:07:21.426 Queue depth: 32 00:07:21.426 Allocate depth: 32 00:07:21.426 # threads/core: 1 00:07:21.426 Run time: 1 seconds 00:07:21.426 Verify: No 00:07:21.426 00:07:21.426 Running for 1 seconds... 00:07:21.426 00:07:21.426 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:21.426 ------------------------------------------------------------------------------------ 00:07:21.426 0,0 143936/s 571 MiB/s 0 0 00:07:21.426 ==================================================================================== 00:07:21.426 Total 143936/s 562 MiB/s 0 0' 00:07:21.426 11:01:41 -- accel/accel.sh@20 -- # IFS=: 00:07:21.426 11:01:41 -- accel/accel.sh@20 -- # read -r var val 00:07:21.426 11:01:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:21.426 11:01:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:21.426 11:01:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.426 11:01:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.426 11:01:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.426 11:01:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.426 11:01:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.426 11:01:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.426 11:01:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.426 11:01:41 -- accel/accel.sh@42 -- # jq -r . 
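dif_verify is invoked without -y, which is why the report above shows "Verify: No"; the 512-byte block size and 8 bytes of metadata are the DIF settings listed in the same configuration block. A standalone sketch under the same assumption about the omitted JSON config:

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_verify
  # 1-second DIF verify pass on 4096-byte buffers with 512-byte blocks and 8-byte metadata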
00:07:21.426 [2024-12-13 11:01:41.751414] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:21.426 [2024-12-13 11:01:41.751492] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1464197 ] 00:07:21.426 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.426 [2024-12-13 11:01:41.803751] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.426 [2024-12-13 11:01:41.865322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.426 11:01:41 -- accel/accel.sh@21 -- # val= 00:07:21.426 11:01:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.426 11:01:41 -- accel/accel.sh@20 -- # IFS=: 00:07:21.426 11:01:41 -- accel/accel.sh@20 -- # read -r var val 00:07:21.426 11:01:41 -- accel/accel.sh@21 -- # val= 00:07:21.426 11:01:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.426 11:01:41 -- accel/accel.sh@20 -- # IFS=: 00:07:21.426 11:01:41 -- accel/accel.sh@20 -- # read -r var val 00:07:21.426 11:01:41 -- accel/accel.sh@21 -- # val=0x1 00:07:21.426 11:01:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.426 11:01:41 -- accel/accel.sh@20 -- # IFS=: 00:07:21.426 11:01:41 -- accel/accel.sh@20 -- # read -r var val 00:07:21.426 11:01:41 -- accel/accel.sh@21 -- # val= 00:07:21.426 11:01:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.426 11:01:41 -- accel/accel.sh@20 -- # IFS=: 00:07:21.426 11:01:41 -- accel/accel.sh@20 -- # read -r var val 00:07:21.426 11:01:41 -- accel/accel.sh@21 -- # val= 00:07:21.426 11:01:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.426 11:01:41 -- accel/accel.sh@20 -- # IFS=: 00:07:21.426 11:01:41 -- accel/accel.sh@20 -- # read -r var val 00:07:21.426 11:01:41 -- accel/accel.sh@21 -- # val=dif_verify 00:07:21.426 11:01:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.426 11:01:41 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:21.426 11:01:41 -- accel/accel.sh@20 -- # IFS=: 00:07:21.426 11:01:41 -- accel/accel.sh@20 -- # read -r var val 00:07:21.426 11:01:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:21.426 11:01:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.426 11:01:41 -- accel/accel.sh@20 -- # IFS=: 00:07:21.426 11:01:41 -- accel/accel.sh@20 -- # read -r var val 00:07:21.426 11:01:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:21.426 11:01:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.426 11:01:41 -- accel/accel.sh@20 -- # IFS=: 00:07:21.426 11:01:41 -- accel/accel.sh@20 -- # read -r var val 00:07:21.426 11:01:41 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:21.426 11:01:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.426 11:01:41 -- accel/accel.sh@20 -- # IFS=: 00:07:21.426 11:01:41 -- accel/accel.sh@20 -- # read -r var val 00:07:21.426 11:01:41 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:21.426 11:01:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.426 11:01:41 -- accel/accel.sh@20 -- # IFS=: 00:07:21.426 11:01:41 -- accel/accel.sh@20 -- # read -r var val 00:07:21.426 11:01:41 -- accel/accel.sh@21 -- # val= 00:07:21.426 11:01:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.426 11:01:41 -- accel/accel.sh@20 -- # IFS=: 00:07:21.426 11:01:41 -- accel/accel.sh@20 -- # read -r var val 00:07:21.427 11:01:41 -- accel/accel.sh@21 -- # val=software 00:07:21.427 11:01:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.427 11:01:41 -- accel/accel.sh@23 -- # 
accel_module=software 00:07:21.427 11:01:41 -- accel/accel.sh@20 -- # IFS=: 00:07:21.427 11:01:41 -- accel/accel.sh@20 -- # read -r var val 00:07:21.427 11:01:41 -- accel/accel.sh@21 -- # val=32 00:07:21.427 11:01:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.427 11:01:41 -- accel/accel.sh@20 -- # IFS=: 00:07:21.427 11:01:41 -- accel/accel.sh@20 -- # read -r var val 00:07:21.427 11:01:41 -- accel/accel.sh@21 -- # val=32 00:07:21.427 11:01:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.427 11:01:41 -- accel/accel.sh@20 -- # IFS=: 00:07:21.427 11:01:41 -- accel/accel.sh@20 -- # read -r var val 00:07:21.427 11:01:41 -- accel/accel.sh@21 -- # val=1 00:07:21.427 11:01:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.427 11:01:41 -- accel/accel.sh@20 -- # IFS=: 00:07:21.427 11:01:41 -- accel/accel.sh@20 -- # read -r var val 00:07:21.427 11:01:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:21.427 11:01:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.427 11:01:41 -- accel/accel.sh@20 -- # IFS=: 00:07:21.427 11:01:41 -- accel/accel.sh@20 -- # read -r var val 00:07:21.427 11:01:41 -- accel/accel.sh@21 -- # val=No 00:07:21.427 11:01:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.427 11:01:41 -- accel/accel.sh@20 -- # IFS=: 00:07:21.427 11:01:41 -- accel/accel.sh@20 -- # read -r var val 00:07:21.427 11:01:41 -- accel/accel.sh@21 -- # val= 00:07:21.427 11:01:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.427 11:01:41 -- accel/accel.sh@20 -- # IFS=: 00:07:21.427 11:01:41 -- accel/accel.sh@20 -- # read -r var val 00:07:21.427 11:01:41 -- accel/accel.sh@21 -- # val= 00:07:21.427 11:01:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.427 11:01:41 -- accel/accel.sh@20 -- # IFS=: 00:07:21.427 11:01:41 -- accel/accel.sh@20 -- # read -r var val 00:07:22.804 11:01:43 -- accel/accel.sh@21 -- # val= 00:07:22.804 11:01:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.804 11:01:43 -- accel/accel.sh@20 -- # IFS=: 00:07:22.804 11:01:43 -- accel/accel.sh@20 -- # read -r var val 00:07:22.804 11:01:43 -- accel/accel.sh@21 -- # val= 00:07:22.804 11:01:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.804 11:01:43 -- accel/accel.sh@20 -- # IFS=: 00:07:22.804 11:01:43 -- accel/accel.sh@20 -- # read -r var val 00:07:22.804 11:01:43 -- accel/accel.sh@21 -- # val= 00:07:22.804 11:01:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.804 11:01:43 -- accel/accel.sh@20 -- # IFS=: 00:07:22.804 11:01:43 -- accel/accel.sh@20 -- # read -r var val 00:07:22.804 11:01:43 -- accel/accel.sh@21 -- # val= 00:07:22.804 11:01:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.804 11:01:43 -- accel/accel.sh@20 -- # IFS=: 00:07:22.804 11:01:43 -- accel/accel.sh@20 -- # read -r var val 00:07:22.804 11:01:43 -- accel/accel.sh@21 -- # val= 00:07:22.804 11:01:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.804 11:01:43 -- accel/accel.sh@20 -- # IFS=: 00:07:22.804 11:01:43 -- accel/accel.sh@20 -- # read -r var val 00:07:22.804 11:01:43 -- accel/accel.sh@21 -- # val= 00:07:22.804 11:01:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.804 11:01:43 -- accel/accel.sh@20 -- # IFS=: 00:07:22.804 11:01:43 -- accel/accel.sh@20 -- # read -r var val 00:07:22.804 11:01:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:22.804 11:01:43 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:22.804 11:01:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.804 00:07:22.804 real 0m2.669s 00:07:22.804 user 0m2.463s 00:07:22.804 sys 0m0.215s 00:07:22.804 11:01:43 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:07:22.804 11:01:43 -- common/autotest_common.sh@10 -- # set +x 00:07:22.805 ************************************ 00:07:22.805 END TEST accel_dif_verify 00:07:22.805 ************************************ 00:07:22.805 11:01:43 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:22.805 11:01:43 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:22.805 11:01:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:22.805 11:01:43 -- common/autotest_common.sh@10 -- # set +x 00:07:22.805 ************************************ 00:07:22.805 START TEST accel_dif_generate 00:07:22.805 ************************************ 00:07:22.805 11:01:43 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:07:22.805 11:01:43 -- accel/accel.sh@16 -- # local accel_opc 00:07:22.805 11:01:43 -- accel/accel.sh@17 -- # local accel_module 00:07:22.805 11:01:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:22.805 11:01:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:22.805 11:01:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.805 11:01:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.805 11:01:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.805 11:01:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.805 11:01:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.805 11:01:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.805 11:01:43 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.805 11:01:43 -- accel/accel.sh@42 -- # jq -r . 00:07:22.805 [2024-12-13 11:01:43.125719] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:22.805 [2024-12-13 11:01:43.125796] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1464478 ] 00:07:22.805 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.805 [2024-12-13 11:01:43.178241] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.805 [2024-12-13 11:01:43.241163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.183 11:01:44 -- accel/accel.sh@18 -- # out=' 00:07:24.183 SPDK Configuration: 00:07:24.183 Core mask: 0x1 00:07:24.183 00:07:24.183 Accel Perf Configuration: 00:07:24.183 Workload Type: dif_generate 00:07:24.183 Vector size: 4096 bytes 00:07:24.183 Transfer size: 4096 bytes 00:07:24.183 Block size: 512 bytes 00:07:24.183 Metadata size: 8 bytes 00:07:24.183 Vector count 1 00:07:24.183 Module: software 00:07:24.183 Queue depth: 32 00:07:24.183 Allocate depth: 32 00:07:24.183 # threads/core: 1 00:07:24.183 Run time: 1 seconds 00:07:24.183 Verify: No 00:07:24.183 00:07:24.183 Running for 1 seconds... 
00:07:24.183 00:07:24.183 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:24.183 ------------------------------------------------------------------------------------ 00:07:24.183 0,0 175072/s 694 MiB/s 0 0 00:07:24.183 ==================================================================================== 00:07:24.183 Total 175072/s 683 MiB/s 0 0' 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # IFS=: 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # read -r var val 00:07:24.183 11:01:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:24.183 11:01:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:24.183 11:01:44 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.183 11:01:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.183 11:01:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.183 11:01:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.183 11:01:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.183 11:01:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.183 11:01:44 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.183 11:01:44 -- accel/accel.sh@42 -- # jq -r . 00:07:24.183 [2024-12-13 11:01:44.457863] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:24.183 [2024-12-13 11:01:44.457934] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1464711 ] 00:07:24.183 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.183 [2024-12-13 11:01:44.512054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.183 [2024-12-13 11:01:44.576560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.183 11:01:44 -- accel/accel.sh@21 -- # val= 00:07:24.183 11:01:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # IFS=: 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # read -r var val 00:07:24.183 11:01:44 -- accel/accel.sh@21 -- # val= 00:07:24.183 11:01:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # IFS=: 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # read -r var val 00:07:24.183 11:01:44 -- accel/accel.sh@21 -- # val=0x1 00:07:24.183 11:01:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # IFS=: 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # read -r var val 00:07:24.183 11:01:44 -- accel/accel.sh@21 -- # val= 00:07:24.183 11:01:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # IFS=: 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # read -r var val 00:07:24.183 11:01:44 -- accel/accel.sh@21 -- # val= 00:07:24.183 11:01:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # IFS=: 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # read -r var val 00:07:24.183 11:01:44 -- accel/accel.sh@21 -- # val=dif_generate 00:07:24.183 11:01:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.183 11:01:44 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # IFS=: 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # read -r var val 00:07:24.183 11:01:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:24.183 11:01:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # IFS=: 
00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # read -r var val 00:07:24.183 11:01:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:24.183 11:01:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # IFS=: 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # read -r var val 00:07:24.183 11:01:44 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:24.183 11:01:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # IFS=: 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # read -r var val 00:07:24.183 11:01:44 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:24.183 11:01:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # IFS=: 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # read -r var val 00:07:24.183 11:01:44 -- accel/accel.sh@21 -- # val= 00:07:24.183 11:01:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # IFS=: 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # read -r var val 00:07:24.183 11:01:44 -- accel/accel.sh@21 -- # val=software 00:07:24.183 11:01:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.183 11:01:44 -- accel/accel.sh@23 -- # accel_module=software 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # IFS=: 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # read -r var val 00:07:24.183 11:01:44 -- accel/accel.sh@21 -- # val=32 00:07:24.183 11:01:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # IFS=: 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # read -r var val 00:07:24.183 11:01:44 -- accel/accel.sh@21 -- # val=32 00:07:24.183 11:01:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # IFS=: 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # read -r var val 00:07:24.183 11:01:44 -- accel/accel.sh@21 -- # val=1 00:07:24.183 11:01:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # IFS=: 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # read -r var val 00:07:24.183 11:01:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:24.183 11:01:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # IFS=: 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # read -r var val 00:07:24.183 11:01:44 -- accel/accel.sh@21 -- # val=No 00:07:24.183 11:01:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # IFS=: 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # read -r var val 00:07:24.183 11:01:44 -- accel/accel.sh@21 -- # val= 00:07:24.183 11:01:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # IFS=: 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # read -r var val 00:07:24.183 11:01:44 -- accel/accel.sh@21 -- # val= 00:07:24.183 11:01:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # IFS=: 00:07:24.183 11:01:44 -- accel/accel.sh@20 -- # read -r var val 00:07:25.561 11:01:45 -- accel/accel.sh@21 -- # val= 00:07:25.561 11:01:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.561 11:01:45 -- accel/accel.sh@20 -- # IFS=: 00:07:25.561 11:01:45 -- accel/accel.sh@20 -- # read -r var val 00:07:25.561 11:01:45 -- accel/accel.sh@21 -- # val= 00:07:25.561 11:01:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.561 11:01:45 -- accel/accel.sh@20 -- # IFS=: 00:07:25.561 11:01:45 -- accel/accel.sh@20 -- # read -r var val 00:07:25.561 11:01:45 -- accel/accel.sh@21 -- # val= 00:07:25.561 11:01:45 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:25.561 11:01:45 -- accel/accel.sh@20 -- # IFS=: 00:07:25.561 11:01:45 -- accel/accel.sh@20 -- # read -r var val 00:07:25.561 11:01:45 -- accel/accel.sh@21 -- # val= 00:07:25.561 11:01:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.561 11:01:45 -- accel/accel.sh@20 -- # IFS=: 00:07:25.561 11:01:45 -- accel/accel.sh@20 -- # read -r var val 00:07:25.561 11:01:45 -- accel/accel.sh@21 -- # val= 00:07:25.561 11:01:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.561 11:01:45 -- accel/accel.sh@20 -- # IFS=: 00:07:25.561 11:01:45 -- accel/accel.sh@20 -- # read -r var val 00:07:25.561 11:01:45 -- accel/accel.sh@21 -- # val= 00:07:25.561 11:01:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.561 11:01:45 -- accel/accel.sh@20 -- # IFS=: 00:07:25.561 11:01:45 -- accel/accel.sh@20 -- # read -r var val 00:07:25.561 11:01:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:25.561 11:01:45 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:25.561 11:01:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.561 00:07:25.561 real 0m2.671s 00:07:25.561 user 0m2.458s 00:07:25.561 sys 0m0.224s 00:07:25.561 11:01:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:25.561 11:01:45 -- common/autotest_common.sh@10 -- # set +x 00:07:25.561 ************************************ 00:07:25.561 END TEST accel_dif_generate 00:07:25.561 ************************************ 00:07:25.561 11:01:45 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:25.561 11:01:45 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:25.561 11:01:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.561 11:01:45 -- common/autotest_common.sh@10 -- # set +x 00:07:25.561 ************************************ 00:07:25.561 START TEST accel_dif_generate_copy 00:07:25.561 ************************************ 00:07:25.561 11:01:45 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:07:25.561 11:01:45 -- accel/accel.sh@16 -- # local accel_opc 00:07:25.561 11:01:45 -- accel/accel.sh@17 -- # local accel_module 00:07:25.561 11:01:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:25.561 11:01:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:25.561 11:01:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.561 11:01:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.561 11:01:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.561 11:01:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.561 11:01:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.561 11:01:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.561 11:01:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.561 11:01:45 -- accel/accel.sh@42 -- # jq -r . 00:07:25.561 [2024-12-13 11:01:45.836212] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:25.561 [2024-12-13 11:01:45.836292] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1464960 ] 00:07:25.561 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.561 [2024-12-13 11:01:45.889711] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.561 [2024-12-13 11:01:45.954241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.938 11:01:47 -- accel/accel.sh@18 -- # out=' 00:07:26.938 SPDK Configuration: 00:07:26.938 Core mask: 0x1 00:07:26.938 00:07:26.938 Accel Perf Configuration: 00:07:26.938 Workload Type: dif_generate_copy 00:07:26.938 Vector size: 4096 bytes 00:07:26.938 Transfer size: 4096 bytes 00:07:26.938 Vector count 1 00:07:26.938 Module: software 00:07:26.938 Queue depth: 32 00:07:26.938 Allocate depth: 32 00:07:26.938 # threads/core: 1 00:07:26.938 Run time: 1 seconds 00:07:26.938 Verify: No 00:07:26.938 00:07:26.938 Running for 1 seconds... 00:07:26.938 00:07:26.938 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:26.938 ------------------------------------------------------------------------------------ 00:07:26.938 0,0 135648/s 538 MiB/s 0 0 00:07:26.938 ==================================================================================== 00:07:26.938 Total 135648/s 529 MiB/s 0 0' 00:07:26.938 11:01:47 -- accel/accel.sh@20 -- # IFS=: 00:07:26.938 11:01:47 -- accel/accel.sh@20 -- # read -r var val 00:07:26.938 11:01:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:26.938 11:01:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:26.938 11:01:47 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.938 11:01:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.938 11:01:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.938 11:01:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.938 11:01:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.938 11:01:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.938 11:01:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.938 11:01:47 -- accel/accel.sh@42 -- # jq -r . 00:07:26.938 [2024-12-13 11:01:47.168398] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
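The dif_generate_copy run rounds out the DIF set; like dif_generate before it, it runs without -y, so Verify is again reported as No. Sketched standalone:

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy
  # generate-and-copy of DIF-protected 4096-byte buffers for 1 second on the software module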
00:07:26.938 [2024-12-13 11:01:47.168458] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1465167 ] 00:07:26.938 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.938 [2024-12-13 11:01:47.220061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.938 [2024-12-13 11:01:47.282881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.938 11:01:47 -- accel/accel.sh@21 -- # val= 00:07:26.938 11:01:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.938 11:01:47 -- accel/accel.sh@20 -- # IFS=: 00:07:26.938 11:01:47 -- accel/accel.sh@20 -- # read -r var val 00:07:26.938 11:01:47 -- accel/accel.sh@21 -- # val= 00:07:26.938 11:01:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.938 11:01:47 -- accel/accel.sh@20 -- # IFS=: 00:07:26.938 11:01:47 -- accel/accel.sh@20 -- # read -r var val 00:07:26.938 11:01:47 -- accel/accel.sh@21 -- # val=0x1 00:07:26.938 11:01:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.938 11:01:47 -- accel/accel.sh@20 -- # IFS=: 00:07:26.938 11:01:47 -- accel/accel.sh@20 -- # read -r var val 00:07:26.938 11:01:47 -- accel/accel.sh@21 -- # val= 00:07:26.938 11:01:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.938 11:01:47 -- accel/accel.sh@20 -- # IFS=: 00:07:26.938 11:01:47 -- accel/accel.sh@20 -- # read -r var val 00:07:26.938 11:01:47 -- accel/accel.sh@21 -- # val= 00:07:26.938 11:01:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.938 11:01:47 -- accel/accel.sh@20 -- # IFS=: 00:07:26.938 11:01:47 -- accel/accel.sh@20 -- # read -r var val 00:07:26.938 11:01:47 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:26.938 11:01:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.938 11:01:47 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:26.938 11:01:47 -- accel/accel.sh@20 -- # IFS=: 00:07:26.938 11:01:47 -- accel/accel.sh@20 -- # read -r var val 00:07:26.938 11:01:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:26.938 11:01:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.938 11:01:47 -- accel/accel.sh@20 -- # IFS=: 00:07:26.938 11:01:47 -- accel/accel.sh@20 -- # read -r var val 00:07:26.938 11:01:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:26.938 11:01:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.938 11:01:47 -- accel/accel.sh@20 -- # IFS=: 00:07:26.938 11:01:47 -- accel/accel.sh@20 -- # read -r var val 00:07:26.939 11:01:47 -- accel/accel.sh@21 -- # val= 00:07:26.939 11:01:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.939 11:01:47 -- accel/accel.sh@20 -- # IFS=: 00:07:26.939 11:01:47 -- accel/accel.sh@20 -- # read -r var val 00:07:26.939 11:01:47 -- accel/accel.sh@21 -- # val=software 00:07:26.939 11:01:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.939 11:01:47 -- accel/accel.sh@23 -- # accel_module=software 00:07:26.939 11:01:47 -- accel/accel.sh@20 -- # IFS=: 00:07:26.939 11:01:47 -- accel/accel.sh@20 -- # read -r var val 00:07:26.939 11:01:47 -- accel/accel.sh@21 -- # val=32 00:07:26.939 11:01:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.939 11:01:47 -- accel/accel.sh@20 -- # IFS=: 00:07:26.939 11:01:47 -- accel/accel.sh@20 -- # read -r var val 00:07:26.939 11:01:47 -- accel/accel.sh@21 -- # val=32 00:07:26.939 11:01:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.939 11:01:47 -- accel/accel.sh@20 -- # IFS=: 00:07:26.939 11:01:47 -- accel/accel.sh@20 -- # read -r 
var val 00:07:26.939 11:01:47 -- accel/accel.sh@21 -- # val=1 00:07:26.939 11:01:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.939 11:01:47 -- accel/accel.sh@20 -- # IFS=: 00:07:26.939 11:01:47 -- accel/accel.sh@20 -- # read -r var val 00:07:26.939 11:01:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:26.939 11:01:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.939 11:01:47 -- accel/accel.sh@20 -- # IFS=: 00:07:26.939 11:01:47 -- accel/accel.sh@20 -- # read -r var val 00:07:26.939 11:01:47 -- accel/accel.sh@21 -- # val=No 00:07:26.939 11:01:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.939 11:01:47 -- accel/accel.sh@20 -- # IFS=: 00:07:26.939 11:01:47 -- accel/accel.sh@20 -- # read -r var val 00:07:26.939 11:01:47 -- accel/accel.sh@21 -- # val= 00:07:26.939 11:01:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.939 11:01:47 -- accel/accel.sh@20 -- # IFS=: 00:07:26.939 11:01:47 -- accel/accel.sh@20 -- # read -r var val 00:07:26.939 11:01:47 -- accel/accel.sh@21 -- # val= 00:07:26.939 11:01:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.939 11:01:47 -- accel/accel.sh@20 -- # IFS=: 00:07:26.939 11:01:47 -- accel/accel.sh@20 -- # read -r var val 00:07:28.317 11:01:48 -- accel/accel.sh@21 -- # val= 00:07:28.317 11:01:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.317 11:01:48 -- accel/accel.sh@20 -- # IFS=: 00:07:28.317 11:01:48 -- accel/accel.sh@20 -- # read -r var val 00:07:28.317 11:01:48 -- accel/accel.sh@21 -- # val= 00:07:28.317 11:01:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.317 11:01:48 -- accel/accel.sh@20 -- # IFS=: 00:07:28.317 11:01:48 -- accel/accel.sh@20 -- # read -r var val 00:07:28.317 11:01:48 -- accel/accel.sh@21 -- # val= 00:07:28.317 11:01:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.317 11:01:48 -- accel/accel.sh@20 -- # IFS=: 00:07:28.317 11:01:48 -- accel/accel.sh@20 -- # read -r var val 00:07:28.317 11:01:48 -- accel/accel.sh@21 -- # val= 00:07:28.317 11:01:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.317 11:01:48 -- accel/accel.sh@20 -- # IFS=: 00:07:28.317 11:01:48 -- accel/accel.sh@20 -- # read -r var val 00:07:28.317 11:01:48 -- accel/accel.sh@21 -- # val= 00:07:28.317 11:01:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.317 11:01:48 -- accel/accel.sh@20 -- # IFS=: 00:07:28.317 11:01:48 -- accel/accel.sh@20 -- # read -r var val 00:07:28.317 11:01:48 -- accel/accel.sh@21 -- # val= 00:07:28.317 11:01:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.317 11:01:48 -- accel/accel.sh@20 -- # IFS=: 00:07:28.317 11:01:48 -- accel/accel.sh@20 -- # read -r var val 00:07:28.317 11:01:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:28.317 11:01:48 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:28.317 11:01:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.317 00:07:28.317 real 0m2.666s 00:07:28.317 user 0m2.455s 00:07:28.317 sys 0m0.221s 00:07:28.317 11:01:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:28.317 11:01:48 -- common/autotest_common.sh@10 -- # set +x 00:07:28.317 ************************************ 00:07:28.317 END TEST accel_dif_generate_copy 00:07:28.317 ************************************ 00:07:28.317 11:01:48 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:28.317 11:01:48 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:28.317 11:01:48 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:28.317 11:01:48 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:07:28.317 11:01:48 -- common/autotest_common.sh@10 -- # set +x 00:07:28.317 ************************************ 00:07:28.317 START TEST accel_comp 00:07:28.317 ************************************ 00:07:28.317 11:01:48 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:28.317 11:01:48 -- accel/accel.sh@16 -- # local accel_opc 00:07:28.317 11:01:48 -- accel/accel.sh@17 -- # local accel_module 00:07:28.317 11:01:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:28.317 11:01:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:28.317 11:01:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.317 11:01:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.317 11:01:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.317 11:01:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.317 11:01:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.317 11:01:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.317 11:01:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.317 11:01:48 -- accel/accel.sh@42 -- # jq -r . 00:07:28.317 [2024-12-13 11:01:48.543397] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:28.317 [2024-12-13 11:01:48.543476] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1465406 ] 00:07:28.317 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.317 [2024-12-13 11:01:48.596277] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.317 [2024-12-13 11:01:48.660819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.698 11:01:49 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:29.698 00:07:29.698 SPDK Configuration: 00:07:29.698 Core mask: 0x1 00:07:29.698 00:07:29.698 Accel Perf Configuration: 00:07:29.698 Workload Type: compress 00:07:29.698 Transfer size: 4096 bytes 00:07:29.698 Vector count 1 00:07:29.698 Module: software 00:07:29.698 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:29.698 Queue depth: 32 00:07:29.698 Allocate depth: 32 00:07:29.698 # threads/core: 1 00:07:29.698 Run time: 1 seconds 00:07:29.698 Verify: No 00:07:29.698 00:07:29.698 Running for 1 seconds... 
00:07:29.698 00:07:29.698 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:29.698 ------------------------------------------------------------------------------------ 00:07:29.698 0,0 68672/s 268 MiB/s 0 0 00:07:29.698 ==================================================================================== 00:07:29.698 Total 68672/s 268 MiB/s 0 0' 00:07:29.698 11:01:49 -- accel/accel.sh@20 -- # IFS=: 00:07:29.698 11:01:49 -- accel/accel.sh@20 -- # read -r var val 00:07:29.698 11:01:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:29.698 11:01:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:29.698 11:01:49 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.698 11:01:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.698 11:01:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.698 11:01:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.698 11:01:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.698 11:01:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.698 11:01:49 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.698 11:01:49 -- accel/accel.sh@42 -- # jq -r . 00:07:29.698 [2024-12-13 11:01:49.877347] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:29.698 [2024-12-13 11:01:49.877414] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1465611 ] 00:07:29.698 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.698 [2024-12-13 11:01:49.930123] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.698 [2024-12-13 11:01:49.993204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.698 11:01:50 -- accel/accel.sh@21 -- # val= 00:07:29.698 11:01:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.698 11:01:50 -- accel/accel.sh@20 -- # IFS=: 00:07:29.698 11:01:50 -- accel/accel.sh@20 -- # read -r var val 00:07:29.698 11:01:50 -- accel/accel.sh@21 -- # val= 00:07:29.698 11:01:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.698 11:01:50 -- accel/accel.sh@20 -- # IFS=: 00:07:29.698 11:01:50 -- accel/accel.sh@20 -- # read -r var val 00:07:29.698 11:01:50 -- accel/accel.sh@21 -- # val= 00:07:29.698 11:01:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.698 11:01:50 -- accel/accel.sh@20 -- # IFS=: 00:07:29.698 11:01:50 -- accel/accel.sh@20 -- # read -r var val 00:07:29.698 11:01:50 -- accel/accel.sh@21 -- # val=0x1 00:07:29.698 11:01:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.698 11:01:50 -- accel/accel.sh@20 -- # IFS=: 00:07:29.698 11:01:50 -- accel/accel.sh@20 -- # read -r var val 00:07:29.698 11:01:50 -- accel/accel.sh@21 -- # val= 00:07:29.698 11:01:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.698 11:01:50 -- accel/accel.sh@20 -- # IFS=: 00:07:29.698 11:01:50 -- accel/accel.sh@20 -- # read -r var val 00:07:29.698 11:01:50 -- accel/accel.sh@21 -- # val= 00:07:29.698 11:01:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.698 11:01:50 -- accel/accel.sh@20 -- # IFS=: 00:07:29.698 11:01:50 -- accel/accel.sh@20 -- # read -r var val 00:07:29.698 11:01:50 -- accel/accel.sh@21 -- # val=compress 00:07:29.698 11:01:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.698 11:01:50 --
accel/accel.sh@24 -- # accel_opc=compress 00:07:29.698 11:01:50 -- accel/accel.sh@20 -- # IFS=: 00:07:29.698 11:01:50 -- accel/accel.sh@20 -- # read -r var val 00:07:29.698 11:01:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:29.698 11:01:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.698 11:01:50 -- accel/accel.sh@20 -- # IFS=: 00:07:29.698 11:01:50 -- accel/accel.sh@20 -- # read -r var val 00:07:29.698 11:01:50 -- accel/accel.sh@21 -- # val= 00:07:29.698 11:01:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.698 11:01:50 -- accel/accel.sh@20 -- # IFS=: 00:07:29.698 11:01:50 -- accel/accel.sh@20 -- # read -r var val 00:07:29.698 11:01:50 -- accel/accel.sh@21 -- # val=software 00:07:29.698 11:01:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.698 11:01:50 -- accel/accel.sh@23 -- # accel_module=software 00:07:29.698 11:01:50 -- accel/accel.sh@20 -- # IFS=: 00:07:29.698 11:01:50 -- accel/accel.sh@20 -- # read -r var val 00:07:29.698 11:01:50 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:29.698 11:01:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.698 11:01:50 -- accel/accel.sh@20 -- # IFS=: 00:07:29.698 11:01:50 -- accel/accel.sh@20 -- # read -r var val 00:07:29.698 11:01:50 -- accel/accel.sh@21 -- # val=32 00:07:29.698 11:01:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.698 11:01:50 -- accel/accel.sh@20 -- # IFS=: 00:07:29.698 11:01:50 -- accel/accel.sh@20 -- # read -r var val 00:07:29.698 11:01:50 -- accel/accel.sh@21 -- # val=32 00:07:29.698 11:01:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.698 11:01:50 -- accel/accel.sh@20 -- # IFS=: 00:07:29.698 11:01:50 -- accel/accel.sh@20 -- # read -r var val 00:07:29.698 11:01:50 -- accel/accel.sh@21 -- # val=1 00:07:29.698 11:01:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.698 11:01:50 -- accel/accel.sh@20 -- # IFS=: 00:07:29.698 11:01:50 -- accel/accel.sh@20 -- # read -r var val 00:07:29.698 11:01:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:29.698 11:01:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.698 11:01:50 -- accel/accel.sh@20 -- # IFS=: 00:07:29.698 11:01:50 -- accel/accel.sh@20 -- # read -r var val 00:07:29.698 11:01:50 -- accel/accel.sh@21 -- # val=No 00:07:29.698 11:01:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.698 11:01:50 -- accel/accel.sh@20 -- # IFS=: 00:07:29.698 11:01:50 -- accel/accel.sh@20 -- # read -r var val 00:07:29.698 11:01:50 -- accel/accel.sh@21 -- # val= 00:07:29.698 11:01:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.698 11:01:50 -- accel/accel.sh@20 -- # IFS=: 00:07:29.698 11:01:50 -- accel/accel.sh@20 -- # read -r var val 00:07:29.698 11:01:50 -- accel/accel.sh@21 -- # val= 00:07:29.698 11:01:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.698 11:01:50 -- accel/accel.sh@20 -- # IFS=: 00:07:29.698 11:01:50 -- accel/accel.sh@20 -- # read -r var val 00:07:30.635 11:01:51 -- accel/accel.sh@21 -- # val= 00:07:30.635 11:01:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.635 11:01:51 -- accel/accel.sh@20 -- # IFS=: 00:07:30.635 11:01:51 -- accel/accel.sh@20 -- # read -r var val 00:07:30.635 11:01:51 -- accel/accel.sh@21 -- # val= 00:07:30.635 11:01:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.635 11:01:51 -- accel/accel.sh@20 -- # IFS=: 00:07:30.635 11:01:51 -- accel/accel.sh@20 -- # read -r var val 00:07:30.635 11:01:51 -- accel/accel.sh@21 -- # val= 00:07:30.635 11:01:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.635 11:01:51 -- accel/accel.sh@20 -- # IFS=: 00:07:30.635 
11:01:51 -- accel/accel.sh@20 -- # read -r var val 00:07:30.635 11:01:51 -- accel/accel.sh@21 -- # val= 00:07:30.635 11:01:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.635 11:01:51 -- accel/accel.sh@20 -- # IFS=: 00:07:30.635 11:01:51 -- accel/accel.sh@20 -- # read -r var val 00:07:30.635 11:01:51 -- accel/accel.sh@21 -- # val= 00:07:30.635 11:01:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.635 11:01:51 -- accel/accel.sh@20 -- # IFS=: 00:07:30.635 11:01:51 -- accel/accel.sh@20 -- # read -r var val 00:07:30.635 11:01:51 -- accel/accel.sh@21 -- # val= 00:07:30.635 11:01:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.635 11:01:51 -- accel/accel.sh@20 -- # IFS=: 00:07:30.635 11:01:51 -- accel/accel.sh@20 -- # read -r var val 00:07:30.635 11:01:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:30.635 11:01:51 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:30.635 11:01:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.635 00:07:30.635 real 0m2.677s 00:07:30.635 user 0m2.459s 00:07:30.635 sys 0m0.227s 00:07:30.635 11:01:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:30.635 11:01:51 -- common/autotest_common.sh@10 -- # set +x 00:07:30.635 ************************************ 00:07:30.635 END TEST accel_comp 00:07:30.635 ************************************ 00:07:30.894 11:01:51 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:30.894 11:01:51 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:30.894 11:01:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:30.894 11:01:51 -- common/autotest_common.sh@10 -- # set +x 00:07:30.894 ************************************ 00:07:30.894 START TEST accel_decomp 00:07:30.894 ************************************ 00:07:30.894 11:01:51 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:30.894 11:01:51 -- accel/accel.sh@16 -- # local accel_opc 00:07:30.894 11:01:51 -- accel/accel.sh@17 -- # local accel_module 00:07:30.894 11:01:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:30.894 11:01:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:30.894 11:01:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.894 11:01:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.894 11:01:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.894 11:01:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.894 11:01:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.894 11:01:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.894 11:01:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.894 11:01:51 -- accel/accel.sh@42 -- # jq -r . 00:07:30.894 [2024-12-13 11:01:51.259623] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
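The accel_comp pass that just finished pushed the bundled test/accel/bib file through the software compress path at about 68672 4096-byte blocks per second (roughly 268 MiB/s), and the accel_decomp run starting here feeds the same file back through decompress. A minimal sketch of the compress invocation, assuming the same tree layout; -l names the input file, matching the 'File Name' field printed above:

  PERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf
  BIB=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib
  # compress the bib test file for 1 second on the software accel module
  $PERF -t 1 -w compress -l $BIB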
00:07:30.894 [2024-12-13 11:01:51.259701] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1465882 ] 00:07:30.894 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.894 [2024-12-13 11:01:51.313545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.894 [2024-12-13 11:01:51.379061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.273 11:01:52 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:32.273 00:07:32.273 SPDK Configuration: 00:07:32.273 Core mask: 0x1 00:07:32.273 00:07:32.273 Accel Perf Configuration: 00:07:32.273 Workload Type: decompress 00:07:32.273 Transfer size: 4096 bytes 00:07:32.273 Vector count 1 00:07:32.273 Module: software 00:07:32.273 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:32.273 Queue depth: 32 00:07:32.273 Allocate depth: 32 00:07:32.273 # threads/core: 1 00:07:32.273 Run time: 1 seconds 00:07:32.273 Verify: Yes 00:07:32.273 00:07:32.273 Running for 1 seconds... 00:07:32.273 00:07:32.273 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:32.273 ------------------------------------------------------------------------------------ 00:07:32.273 0,0 88352/s 162 MiB/s 0 0 00:07:32.273 ==================================================================================== 00:07:32.273 Total 88352/s 345 MiB/s 0 0' 00:07:32.273 11:01:52 -- accel/accel.sh@20 -- # IFS=: 00:07:32.273 11:01:52 -- accel/accel.sh@20 -- # read -r var val 00:07:32.273 11:01:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:32.273 11:01:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:32.273 11:01:52 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.273 11:01:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.273 11:01:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.273 11:01:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.273 11:01:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.273 11:01:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.273 11:01:52 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.273 11:01:52 -- accel/accel.sh@42 -- # jq -r . 00:07:32.273 [2024-12-13 11:01:52.596659] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:32.273 [2024-12-13 11:01:52.596738] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1466150 ] 00:07:32.273 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.273 [2024-12-13 11:01:52.648760] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.273 [2024-12-13 11:01:52.711330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.273 11:01:52 -- accel/accel.sh@21 -- # val= 00:07:32.273 11:01:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.273 11:01:52 -- accel/accel.sh@20 -- # IFS=: 00:07:32.273 11:01:52 -- accel/accel.sh@20 -- # read -r var val 00:07:32.273 11:01:52 -- accel/accel.sh@21 -- # val= 00:07:32.273 11:01:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.273 11:01:52 -- accel/accel.sh@20 -- # IFS=: 00:07:32.273 11:01:52 -- accel/accel.sh@20 -- # read -r var val 00:07:32.273 11:01:52 -- accel/accel.sh@21 -- # val= 00:07:32.273 11:01:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.273 11:01:52 -- accel/accel.sh@20 -- # IFS=: 00:07:32.273 11:01:52 -- accel/accel.sh@20 -- # read -r var val 00:07:32.273 11:01:52 -- accel/accel.sh@21 -- # val=0x1 00:07:32.273 11:01:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.273 11:01:52 -- accel/accel.sh@20 -- # IFS=: 00:07:32.273 11:01:52 -- accel/accel.sh@20 -- # read -r var val 00:07:32.273 11:01:52 -- accel/accel.sh@21 -- # val= 00:07:32.273 11:01:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.273 11:01:52 -- accel/accel.sh@20 -- # IFS=: 00:07:32.273 11:01:52 -- accel/accel.sh@20 -- # read -r var val 00:07:32.273 11:01:52 -- accel/accel.sh@21 -- # val= 00:07:32.273 11:01:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.273 11:01:52 -- accel/accel.sh@20 -- # IFS=: 00:07:32.273 11:01:52 -- accel/accel.sh@20 -- # read -r var val 00:07:32.273 11:01:52 -- accel/accel.sh@21 -- # val=decompress 00:07:32.273 11:01:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.273 11:01:52 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:32.273 11:01:52 -- accel/accel.sh@20 -- # IFS=: 00:07:32.273 11:01:52 -- accel/accel.sh@20 -- # read -r var val 00:07:32.273 11:01:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:32.273 11:01:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.273 11:01:52 -- accel/accel.sh@20 -- # IFS=: 00:07:32.273 11:01:52 -- accel/accel.sh@20 -- # read -r var val 00:07:32.273 11:01:52 -- accel/accel.sh@21 -- # val= 00:07:32.273 11:01:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.273 11:01:52 -- accel/accel.sh@20 -- # IFS=: 00:07:32.273 11:01:52 -- accel/accel.sh@20 -- # read -r var val 00:07:32.273 11:01:52 -- accel/accel.sh@21 -- # val=software 00:07:32.273 11:01:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.273 11:01:52 -- accel/accel.sh@23 -- # accel_module=software 00:07:32.273 11:01:52 -- accel/accel.sh@20 -- # IFS=: 00:07:32.273 11:01:52 -- accel/accel.sh@20 -- # read -r var val 00:07:32.273 11:01:52 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:32.273 11:01:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.273 11:01:52 -- accel/accel.sh@20 -- # IFS=: 00:07:32.273 11:01:52 -- accel/accel.sh@20 -- # read -r var val 00:07:32.273 11:01:52 -- accel/accel.sh@21 -- # val=32 00:07:32.273 11:01:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.273 11:01:52 -- accel/accel.sh@20 -- # IFS=: 00:07:32.273 11:01:52 -- 
accel/accel.sh@20 -- # read -r var val 00:07:32.273 11:01:52 -- accel/accel.sh@21 -- # val=32 00:07:32.273 11:01:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.273 11:01:52 -- accel/accel.sh@20 -- # IFS=: 00:07:32.273 11:01:52 -- accel/accel.sh@20 -- # read -r var val 00:07:32.273 11:01:52 -- accel/accel.sh@21 -- # val=1 00:07:32.273 11:01:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.273 11:01:52 -- accel/accel.sh@20 -- # IFS=: 00:07:32.273 11:01:52 -- accel/accel.sh@20 -- # read -r var val 00:07:32.273 11:01:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:32.273 11:01:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.273 11:01:52 -- accel/accel.sh@20 -- # IFS=: 00:07:32.273 11:01:52 -- accel/accel.sh@20 -- # read -r var val 00:07:32.273 11:01:52 -- accel/accel.sh@21 -- # val=Yes 00:07:32.273 11:01:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.273 11:01:52 -- accel/accel.sh@20 -- # IFS=: 00:07:32.273 11:01:52 -- accel/accel.sh@20 -- # read -r var val 00:07:32.273 11:01:52 -- accel/accel.sh@21 -- # val= 00:07:32.273 11:01:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.273 11:01:52 -- accel/accel.sh@20 -- # IFS=: 00:07:32.273 11:01:52 -- accel/accel.sh@20 -- # read -r var val 00:07:32.273 11:01:52 -- accel/accel.sh@21 -- # val= 00:07:32.273 11:01:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.273 11:01:52 -- accel/accel.sh@20 -- # IFS=: 00:07:32.273 11:01:52 -- accel/accel.sh@20 -- # read -r var val 00:07:33.652 11:01:53 -- accel/accel.sh@21 -- # val= 00:07:33.652 11:01:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.652 11:01:53 -- accel/accel.sh@20 -- # IFS=: 00:07:33.652 11:01:53 -- accel/accel.sh@20 -- # read -r var val 00:07:33.652 11:01:53 -- accel/accel.sh@21 -- # val= 00:07:33.652 11:01:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.652 11:01:53 -- accel/accel.sh@20 -- # IFS=: 00:07:33.652 11:01:53 -- accel/accel.sh@20 -- # read -r var val 00:07:33.652 11:01:53 -- accel/accel.sh@21 -- # val= 00:07:33.652 11:01:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.652 11:01:53 -- accel/accel.sh@20 -- # IFS=: 00:07:33.652 11:01:53 -- accel/accel.sh@20 -- # read -r var val 00:07:33.652 11:01:53 -- accel/accel.sh@21 -- # val= 00:07:33.652 11:01:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.652 11:01:53 -- accel/accel.sh@20 -- # IFS=: 00:07:33.652 11:01:53 -- accel/accel.sh@20 -- # read -r var val 00:07:33.652 11:01:53 -- accel/accel.sh@21 -- # val= 00:07:33.652 11:01:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.652 11:01:53 -- accel/accel.sh@20 -- # IFS=: 00:07:33.652 11:01:53 -- accel/accel.sh@20 -- # read -r var val 00:07:33.652 11:01:53 -- accel/accel.sh@21 -- # val= 00:07:33.652 11:01:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.652 11:01:53 -- accel/accel.sh@20 -- # IFS=: 00:07:33.652 11:01:53 -- accel/accel.sh@20 -- # read -r var val 00:07:33.652 11:01:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:33.652 11:01:53 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:33.652 11:01:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.652 00:07:33.652 real 0m2.675s 00:07:33.652 user 0m2.464s 00:07:33.652 sys 0m0.221s 00:07:33.652 11:01:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:33.652 11:01:53 -- common/autotest_common.sh@10 -- # set +x 00:07:33.652 ************************************ 00:07:33.652 END TEST accel_decomp 00:07:33.652 ************************************ 00:07:33.652 11:01:53 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:33.652 11:01:53 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:33.652 11:01:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.652 11:01:53 -- common/autotest_common.sh@10 -- # set +x 00:07:33.652 ************************************ 00:07:33.652 START TEST accel_decmop_full 00:07:33.652 ************************************ 00:07:33.652 11:01:53 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:33.652 11:01:53 -- accel/accel.sh@16 -- # local accel_opc 00:07:33.652 11:01:53 -- accel/accel.sh@17 -- # local accel_module 00:07:33.652 11:01:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:33.652 11:01:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:33.652 11:01:53 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.652 11:01:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.652 11:01:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.652 11:01:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.652 11:01:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.652 11:01:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.652 11:01:53 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.652 11:01:53 -- accel/accel.sh@42 -- # jq -r . 00:07:33.652 [2024-12-13 11:01:53.973194] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:33.652 [2024-12-13 11:01:53.973281] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1466434 ] 00:07:33.652 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.652 [2024-12-13 11:01:54.026412] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.652 [2024-12-13 11:01:54.090346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.031 11:01:55 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:35.031 00:07:35.031 SPDK Configuration: 00:07:35.031 Core mask: 0x1 00:07:35.031 00:07:35.031 Accel Perf Configuration: 00:07:35.031 Workload Type: decompress 00:07:35.031 Transfer size: 111250 bytes 00:07:35.031 Vector count 1 00:07:35.031 Module: software 00:07:35.031 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:35.031 Queue depth: 32 00:07:35.031 Allocate depth: 32 00:07:35.031 # threads/core: 1 00:07:35.031 Run time: 1 seconds 00:07:35.031 Verify: Yes 00:07:35.031 00:07:35.031 Running for 1 seconds... 
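This accel_decmop_full variant appends -o 0 to the same decompress command line, and the configuration block above shows the effect: the transfer size becomes the full 111250-byte chunk instead of 4096 bytes. A sketch of the equivalent manual run, under the same path assumptions:

  # decompress the bib file in full-sized (111250-byte) chunks, with verification
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0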
00:07:35.031 00:07:35.031 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:35.031 ------------------------------------------------------------------------------------ 00:07:35.031 0,0 6048/s 641 MiB/s 0 0 00:07:35.031 ==================================================================================== 00:07:35.031 Total 6048/s 641 MiB/s 0 0' 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # IFS=: 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # read -r var val 00:07:35.031 11:01:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:35.031 11:01:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:35.031 11:01:55 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.031 11:01:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:35.031 11:01:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.031 11:01:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.031 11:01:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:35.031 11:01:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:35.031 11:01:55 -- accel/accel.sh@41 -- # local IFS=, 00:07:35.031 11:01:55 -- accel/accel.sh@42 -- # jq -r . 00:07:35.031 [2024-12-13 11:01:55.314534] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:35.031 [2024-12-13 11:01:55.314601] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1466700 ] 00:07:35.031 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.031 [2024-12-13 11:01:55.366653] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.031 [2024-12-13 11:01:55.428911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.031 11:01:55 -- accel/accel.sh@21 -- # val= 00:07:35.031 11:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # IFS=: 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # read -r var val 00:07:35.031 11:01:55 -- accel/accel.sh@21 -- # val= 00:07:35.031 11:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # IFS=: 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # read -r var val 00:07:35.031 11:01:55 -- accel/accel.sh@21 -- # val= 00:07:35.031 11:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # IFS=: 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # read -r var val 00:07:35.031 11:01:55 -- accel/accel.sh@21 -- # val=0x1 00:07:35.031 11:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # IFS=: 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # read -r var val 00:07:35.031 11:01:55 -- accel/accel.sh@21 -- # val= 00:07:35.031 11:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # IFS=: 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # read -r var val 00:07:35.031 11:01:55 -- accel/accel.sh@21 -- # val= 00:07:35.031 11:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # IFS=: 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # read -r var val 00:07:35.031 11:01:55 -- accel/accel.sh@21 -- # val=decompress 00:07:35.031 11:01:55 -- accel/accel.sh@22 -- # case "$var" in
00:07:35.031 11:01:55 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # IFS=: 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # read -r var val 00:07:35.031 11:01:55 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:35.031 11:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # IFS=: 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # read -r var val 00:07:35.031 11:01:55 -- accel/accel.sh@21 -- # val= 00:07:35.031 11:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # IFS=: 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # read -r var val 00:07:35.031 11:01:55 -- accel/accel.sh@21 -- # val=software 00:07:35.031 11:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.031 11:01:55 -- accel/accel.sh@23 -- # accel_module=software 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # IFS=: 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # read -r var val 00:07:35.031 11:01:55 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:35.031 11:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # IFS=: 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # read -r var val 00:07:35.031 11:01:55 -- accel/accel.sh@21 -- # val=32 00:07:35.031 11:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # IFS=: 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # read -r var val 00:07:35.031 11:01:55 -- accel/accel.sh@21 -- # val=32 00:07:35.031 11:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # IFS=: 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # read -r var val 00:07:35.031 11:01:55 -- accel/accel.sh@21 -- # val=1 00:07:35.031 11:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # IFS=: 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # read -r var val 00:07:35.031 11:01:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:35.031 11:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # IFS=: 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # read -r var val 00:07:35.031 11:01:55 -- accel/accel.sh@21 -- # val=Yes 00:07:35.031 11:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # IFS=: 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # read -r var val 00:07:35.031 11:01:55 -- accel/accel.sh@21 -- # val= 00:07:35.031 11:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # IFS=: 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # read -r var val 00:07:35.031 11:01:55 -- accel/accel.sh@21 -- # val= 00:07:35.031 11:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # IFS=: 00:07:35.031 11:01:55 -- accel/accel.sh@20 -- # read -r var val 00:07:36.409 11:01:56 -- accel/accel.sh@21 -- # val= 00:07:36.409 11:01:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.409 11:01:56 -- accel/accel.sh@20 -- # IFS=: 00:07:36.409 11:01:56 -- accel/accel.sh@20 -- # read -r var val 00:07:36.409 11:01:56 -- accel/accel.sh@21 -- # val= 00:07:36.409 11:01:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.409 11:01:56 -- accel/accel.sh@20 -- # IFS=: 00:07:36.409 11:01:56 -- accel/accel.sh@20 -- # read -r var val 00:07:36.409 11:01:56 -- accel/accel.sh@21 -- # val= 00:07:36.409 11:01:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.409 11:01:56 -- 
accel/accel.sh@20 -- # IFS=: 00:07:36.409 11:01:56 -- accel/accel.sh@20 -- # read -r var val 00:07:36.409 11:01:56 -- accel/accel.sh@21 -- # val= 00:07:36.409 11:01:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.409 11:01:56 -- accel/accel.sh@20 -- # IFS=: 00:07:36.409 11:01:56 -- accel/accel.sh@20 -- # read -r var val 00:07:36.409 11:01:56 -- accel/accel.sh@21 -- # val= 00:07:36.409 11:01:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.409 11:01:56 -- accel/accel.sh@20 -- # IFS=: 00:07:36.409 11:01:56 -- accel/accel.sh@20 -- # read -r var val 00:07:36.409 11:01:56 -- accel/accel.sh@21 -- # val= 00:07:36.409 11:01:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.409 11:01:56 -- accel/accel.sh@20 -- # IFS=: 00:07:36.409 11:01:56 -- accel/accel.sh@20 -- # read -r var val 00:07:36.409 11:01:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:36.409 11:01:56 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:36.409 11:01:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.409 00:07:36.409 real 0m2.687s 00:07:36.409 user 0m2.472s 00:07:36.409 sys 0m0.221s 00:07:36.409 11:01:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:36.409 11:01:56 -- common/autotest_common.sh@10 -- # set +x 00:07:36.409 ************************************ 00:07:36.409 END TEST accel_decmop_full 00:07:36.409 ************************************ 00:07:36.409 11:01:56 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:36.409 11:01:56 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:36.409 11:01:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:36.409 11:01:56 -- common/autotest_common.sh@10 -- # set +x 00:07:36.409 ************************************ 00:07:36.409 START TEST accel_decomp_mcore 00:07:36.409 ************************************ 00:07:36.410 11:01:56 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:36.410 11:01:56 -- accel/accel.sh@16 -- # local accel_opc 00:07:36.410 11:01:56 -- accel/accel.sh@17 -- # local accel_module 00:07:36.410 11:01:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:36.410 11:01:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:36.410 11:01:56 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.410 11:01:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:36.410 11:01:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.410 11:01:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.410 11:01:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:36.410 11:01:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:36.410 11:01:56 -- accel/accel.sh@41 -- # local IFS=, 00:07:36.410 11:01:56 -- accel/accel.sh@42 -- # jq -r . 00:07:36.410 [2024-12-13 11:01:56.698344] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:36.410 [2024-12-13 11:01:56.698406] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1466982 ] 00:07:36.410 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.410 [2024-12-13 11:01:56.751209] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:36.410 [2024-12-13 11:01:56.817388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.410 [2024-12-13 11:01:56.817481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.410 [2024-12-13 11:01:56.817552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:36.410 [2024-12-13 11:01:56.817554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.789 11:01:58 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:37.789 00:07:37.789 SPDK Configuration: 00:07:37.789 Core mask: 0xf 00:07:37.789 00:07:37.789 Accel Perf Configuration: 00:07:37.789 Workload Type: decompress 00:07:37.789 Transfer size: 4096 bytes 00:07:37.789 Vector count 1 00:07:37.789 Module: software 00:07:37.789 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:37.789 Queue depth: 32 00:07:37.789 Allocate depth: 32 00:07:37.789 # threads/core: 1 00:07:37.789 Run time: 1 seconds 00:07:37.789 Verify: Yes 00:07:37.789 00:07:37.789 Running for 1 seconds... 00:07:37.789 00:07:37.789 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:37.789 ------------------------------------------------------------------------------------ 00:07:37.789 0,0 70848/s 276 MiB/s 0 0 00:07:37.789 3,0 74560/s 291 MiB/s 0 0 00:07:37.789 2,0 74432/s 290 MiB/s 0 0 00:07:37.789 1,0 74528/s 291 MiB/s 0 0 00:07:37.789 ==================================================================================== 00:07:37.789 Total 294368/s 1149 MiB/s 0 0' 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # IFS=: 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # read -r var val 00:07:37.789 11:01:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:37.789 11:01:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:37.789 11:01:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.789 11:01:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.789 11:01:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.789 11:01:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.789 11:01:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.789 11:01:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.789 11:01:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.789 11:01:58 -- accel/accel.sh@42 -- # jq -r . 00:07:37.789 [2024-12-13 11:01:58.041255] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
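The -m 0xf run above spreads the decompress workload across four reactors (cores 0 through 3), and the four per-core rows sum to the Total line: 294368 transfers/s at 4096 bytes is about 1149 MiB/s aggregate. A sketch of the manual equivalent and of the bandwidth arithmetic, under the same path assumptions:

  # multi-core decompress: core mask 0xf starts reactors on cores 0-3
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf
  # aggregate bandwidth check: prints 1149 (MiB/s)
  echo '294368 * 4096 / 1048576' | bc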
00:07:37.789 [2024-12-13 11:01:58.041339] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1467257 ] 00:07:37.789 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.789 [2024-12-13 11:01:58.093778] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:37.789 [2024-12-13 11:01:58.158194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.789 [2024-12-13 11:01:58.158293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.789 [2024-12-13 11:01:58.158361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:37.789 [2024-12-13 11:01:58.158363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.789 11:01:58 -- accel/accel.sh@21 -- # val= 00:07:37.789 11:01:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # IFS=: 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # read -r var val 00:07:37.789 11:01:58 -- accel/accel.sh@21 -- # val= 00:07:37.789 11:01:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # IFS=: 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # read -r var val 00:07:37.789 11:01:58 -- accel/accel.sh@21 -- # val= 00:07:37.789 11:01:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # IFS=: 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # read -r var val 00:07:37.789 11:01:58 -- accel/accel.sh@21 -- # val=0xf 00:07:37.789 11:01:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # IFS=: 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # read -r var val 00:07:37.789 11:01:58 -- accel/accel.sh@21 -- # val= 00:07:37.789 11:01:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # IFS=: 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # read -r var val 00:07:37.789 11:01:58 -- accel/accel.sh@21 -- # val= 00:07:37.789 11:01:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # IFS=: 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # read -r var val 00:07:37.789 11:01:58 -- accel/accel.sh@21 -- # val=decompress 00:07:37.789 11:01:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.789 11:01:58 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # IFS=: 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # read -r var val 00:07:37.789 11:01:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:37.789 11:01:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # IFS=: 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # read -r var val 00:07:37.789 11:01:58 -- accel/accel.sh@21 -- # val= 00:07:37.789 11:01:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # IFS=: 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # read -r var val 00:07:37.789 11:01:58 -- accel/accel.sh@21 -- # val=software 00:07:37.789 11:01:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.789 11:01:58 -- accel/accel.sh@23 -- # accel_module=software 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # IFS=: 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # read -r var val 00:07:37.789 11:01:58 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:37.789 11:01:58 -- accel/accel.sh@22 -- # case "$var" 
in 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # IFS=: 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # read -r var val 00:07:37.789 11:01:58 -- accel/accel.sh@21 -- # val=32 00:07:37.789 11:01:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # IFS=: 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # read -r var val 00:07:37.789 11:01:58 -- accel/accel.sh@21 -- # val=32 00:07:37.789 11:01:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # IFS=: 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # read -r var val 00:07:37.789 11:01:58 -- accel/accel.sh@21 -- # val=1 00:07:37.789 11:01:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # IFS=: 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # read -r var val 00:07:37.789 11:01:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:37.789 11:01:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # IFS=: 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # read -r var val 00:07:37.789 11:01:58 -- accel/accel.sh@21 -- # val=Yes 00:07:37.789 11:01:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # IFS=: 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # read -r var val 00:07:37.789 11:01:58 -- accel/accel.sh@21 -- # val= 00:07:37.789 11:01:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # IFS=: 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # read -r var val 00:07:37.789 11:01:58 -- accel/accel.sh@21 -- # val= 00:07:37.789 11:01:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # IFS=: 00:07:37.789 11:01:58 -- accel/accel.sh@20 -- # read -r var val 00:07:39.167 11:01:59 -- accel/accel.sh@21 -- # val= 00:07:39.167 11:01:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.167 11:01:59 -- accel/accel.sh@20 -- # IFS=: 00:07:39.167 11:01:59 -- accel/accel.sh@20 -- # read -r var val 00:07:39.167 11:01:59 -- accel/accel.sh@21 -- # val= 00:07:39.167 11:01:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.167 11:01:59 -- accel/accel.sh@20 -- # IFS=: 00:07:39.167 11:01:59 -- accel/accel.sh@20 -- # read -r var val 00:07:39.167 11:01:59 -- accel/accel.sh@21 -- # val= 00:07:39.167 11:01:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.167 11:01:59 -- accel/accel.sh@20 -- # IFS=: 00:07:39.167 11:01:59 -- accel/accel.sh@20 -- # read -r var val 00:07:39.167 11:01:59 -- accel/accel.sh@21 -- # val= 00:07:39.167 11:01:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.167 11:01:59 -- accel/accel.sh@20 -- # IFS=: 00:07:39.167 11:01:59 -- accel/accel.sh@20 -- # read -r var val 00:07:39.167 11:01:59 -- accel/accel.sh@21 -- # val= 00:07:39.167 11:01:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.167 11:01:59 -- accel/accel.sh@20 -- # IFS=: 00:07:39.167 11:01:59 -- accel/accel.sh@20 -- # read -r var val 00:07:39.167 11:01:59 -- accel/accel.sh@21 -- # val= 00:07:39.167 11:01:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.167 11:01:59 -- accel/accel.sh@20 -- # IFS=: 00:07:39.167 11:01:59 -- accel/accel.sh@20 -- # read -r var val 00:07:39.167 11:01:59 -- accel/accel.sh@21 -- # val= 00:07:39.167 11:01:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.167 11:01:59 -- accel/accel.sh@20 -- # IFS=: 00:07:39.167 11:01:59 -- accel/accel.sh@20 -- # read -r var val 00:07:39.167 11:01:59 -- accel/accel.sh@21 -- # val= 00:07:39.167 11:01:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.167 11:01:59 
-- accel/accel.sh@20 -- # IFS=: 00:07:39.167 11:01:59 -- accel/accel.sh@20 -- # read -r var val 00:07:39.167 11:01:59 -- accel/accel.sh@21 -- # val= 00:07:39.167 11:01:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.167 11:01:59 -- accel/accel.sh@20 -- # IFS=: 00:07:39.167 11:01:59 -- accel/accel.sh@20 -- # read -r var val 00:07:39.167 11:01:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:39.167 11:01:59 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:39.167 11:01:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:39.167 00:07:39.167 real 0m2.690s 00:07:39.167 user 0m9.103s 00:07:39.167 sys 0m0.238s 00:07:39.167 11:01:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:39.167 11:01:59 -- common/autotest_common.sh@10 -- # set +x 00:07:39.167 ************************************ 00:07:39.167 END TEST accel_decomp_mcore 00:07:39.167 ************************************ 00:07:39.167 11:01:59 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:39.167 11:01:59 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:39.167 11:01:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:39.167 11:01:59 -- common/autotest_common.sh@10 -- # set +x 00:07:39.167 ************************************ 00:07:39.167 START TEST accel_decomp_full_mcore 00:07:39.167 ************************************ 00:07:39.167 11:01:59 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:39.167 11:01:59 -- accel/accel.sh@16 -- # local accel_opc 00:07:39.167 11:01:59 -- accel/accel.sh@17 -- # local accel_module 00:07:39.167 11:01:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:39.167 11:01:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:39.167 11:01:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:39.167 11:01:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:39.167 11:01:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.167 11:01:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.167 11:01:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:39.167 11:01:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:39.167 11:01:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:39.167 11:01:59 -- accel/accel.sh@42 -- # jq -r . 00:07:39.167 [2024-12-13 11:01:59.427392] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:39.167 [2024-12-13 11:01:59.427464] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1467539 ] 00:07:39.167 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.167 [2024-12-13 11:01:59.482411] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:39.167 [2024-12-13 11:01:59.546109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.167 [2024-12-13 11:01:59.546205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.167 [2024-12-13 11:01:59.546258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:39.167 [2024-12-13 11:01:59.546259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.544 11:02:00 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:40.544 00:07:40.544 SPDK Configuration: 00:07:40.544 Core mask: 0xf 00:07:40.544 00:07:40.544 Accel Perf Configuration: 00:07:40.544 Workload Type: decompress 00:07:40.544 Transfer size: 111250 bytes 00:07:40.544 Vector count 1 00:07:40.544 Module: software 00:07:40.544 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:40.544 Queue depth: 32 00:07:40.544 Allocate depth: 32 00:07:40.544 # threads/core: 1 00:07:40.544 Run time: 1 seconds 00:07:40.544 Verify: Yes 00:07:40.544 00:07:40.544 Running for 1 seconds... 00:07:40.544 00:07:40.544 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:40.544 ------------------------------------------------------------------------------------ 00:07:40.544 0,0 5696/s 604 MiB/s 0 0 00:07:40.544 3,0 6016/s 638 MiB/s 0 0 00:07:40.544 2,0 6016/s 638 MiB/s 0 0 00:07:40.544 1,0 6016/s 638 MiB/s 0 0 00:07:40.544 ==================================================================================== 00:07:40.544 Total 23744/s 2519 MiB/s 0 0' 00:07:40.544 11:02:00 -- accel/accel.sh@20 -- # IFS=: 00:07:40.544 11:02:00 -- accel/accel.sh@20 -- # read -r var val 00:07:40.544 11:02:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:40.544 11:02:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:40.544 11:02:00 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.544 11:02:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:40.544 11:02:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.544 11:02:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.544 11:02:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:40.544 11:02:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:40.544 11:02:00 -- accel/accel.sh@41 -- # local IFS=, 00:07:40.544 11:02:00 -- accel/accel.sh@42 -- # jq -r . 00:07:40.544 [2024-12-13 11:02:00.781737] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
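This accel_decomp_full_mcore variant combines both options, -o 0 for full 111250-byte chunks and -m 0xf for four cores, which yields the roughly 2519 MiB/s aggregate shown above (23744 transfers/s times 111250 bytes). A minimal sketch under the same path assumptions:

  # full-buffer decompress across four cores
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf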
00:07:40.544 [2024-12-13 11:02:00.781804] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1467808 ] 00:07:40.544 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.544 [2024-12-13 11:02:00.835304] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:40.544 [2024-12-13 11:02:00.901780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.544 [2024-12-13 11:02:00.901877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:40.544 [2024-12-13 11:02:00.901948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:40.544 [2024-12-13 11:02:00.901950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.544 11:02:00 -- accel/accel.sh@21 -- # val= 00:07:40.544 11:02:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.544 11:02:00 -- accel/accel.sh@20 -- # IFS=: 00:07:40.544 11:02:00 -- accel/accel.sh@20 -- # read -r var val 00:07:40.544 11:02:00 -- accel/accel.sh@21 -- # val= 00:07:40.544 11:02:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.544 11:02:00 -- accel/accel.sh@20 -- # IFS=: 00:07:40.544 11:02:00 -- accel/accel.sh@20 -- # read -r var val 00:07:40.544 11:02:00 -- accel/accel.sh@21 -- # val= 00:07:40.544 11:02:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.544 11:02:00 -- accel/accel.sh@20 -- # IFS=: 00:07:40.544 11:02:00 -- accel/accel.sh@20 -- # read -r var val 00:07:40.544 11:02:00 -- accel/accel.sh@21 -- # val=0xf 00:07:40.544 11:02:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.544 11:02:00 -- accel/accel.sh@20 -- # IFS=: 00:07:40.544 11:02:00 -- accel/accel.sh@20 -- # read -r var val 00:07:40.544 11:02:00 -- accel/accel.sh@21 -- # val= 00:07:40.544 11:02:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.544 11:02:00 -- accel/accel.sh@20 -- # IFS=: 00:07:40.544 11:02:00 -- accel/accel.sh@20 -- # read -r var val 00:07:40.544 11:02:00 -- accel/accel.sh@21 -- # val= 00:07:40.544 11:02:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.544 11:02:00 -- accel/accel.sh@20 -- # IFS=: 00:07:40.544 11:02:00 -- accel/accel.sh@20 -- # read -r var val 00:07:40.544 11:02:00 -- accel/accel.sh@21 -- # val=decompress 00:07:40.544 11:02:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.544 11:02:00 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:40.544 11:02:00 -- accel/accel.sh@20 -- # IFS=: 00:07:40.544 11:02:00 -- accel/accel.sh@20 -- # read -r var val 00:07:40.544 11:02:00 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:40.544 11:02:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.544 11:02:00 -- accel/accel.sh@20 -- # IFS=: 00:07:40.544 11:02:00 -- accel/accel.sh@20 -- # read -r var val 00:07:40.544 11:02:00 -- accel/accel.sh@21 -- # val= 00:07:40.544 11:02:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.544 11:02:00 -- accel/accel.sh@20 -- # IFS=: 00:07:40.544 11:02:00 -- accel/accel.sh@20 -- # read -r var val 00:07:40.544 11:02:00 -- accel/accel.sh@21 -- # val=software 00:07:40.544 11:02:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.544 11:02:00 -- accel/accel.sh@23 -- # accel_module=software 00:07:40.544 11:02:00 -- accel/accel.sh@20 -- # IFS=: 00:07:40.544 11:02:00 -- accel/accel.sh@20 -- # read -r var val 00:07:40.544 11:02:00 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:40.544 11:02:00 -- accel/accel.sh@22 -- # case "$var" 
in 00:07:40.545 11:02:00 -- accel/accel.sh@20 -- # IFS=: 00:07:40.545 11:02:00 -- accel/accel.sh@20 -- # read -r var val 00:07:40.545 11:02:00 -- accel/accel.sh@21 -- # val=32 00:07:40.545 11:02:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.545 11:02:00 -- accel/accel.sh@20 -- # IFS=: 00:07:40.545 11:02:00 -- accel/accel.sh@20 -- # read -r var val 00:07:40.545 11:02:00 -- accel/accel.sh@21 -- # val=32 00:07:40.545 11:02:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.545 11:02:00 -- accel/accel.sh@20 -- # IFS=: 00:07:40.545 11:02:00 -- accel/accel.sh@20 -- # read -r var val 00:07:40.545 11:02:00 -- accel/accel.sh@21 -- # val=1 00:07:40.545 11:02:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.545 11:02:00 -- accel/accel.sh@20 -- # IFS=: 00:07:40.545 11:02:00 -- accel/accel.sh@20 -- # read -r var val 00:07:40.545 11:02:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:40.545 11:02:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.545 11:02:00 -- accel/accel.sh@20 -- # IFS=: 00:07:40.545 11:02:00 -- accel/accel.sh@20 -- # read -r var val 00:07:40.545 11:02:00 -- accel/accel.sh@21 -- # val=Yes 00:07:40.545 11:02:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.545 11:02:00 -- accel/accel.sh@20 -- # IFS=: 00:07:40.545 11:02:00 -- accel/accel.sh@20 -- # read -r var val 00:07:40.545 11:02:00 -- accel/accel.sh@21 -- # val= 00:07:40.545 11:02:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.545 11:02:00 -- accel/accel.sh@20 -- # IFS=: 00:07:40.545 11:02:00 -- accel/accel.sh@20 -- # read -r var val 00:07:40.545 11:02:00 -- accel/accel.sh@21 -- # val= 00:07:40.545 11:02:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.545 11:02:00 -- accel/accel.sh@20 -- # IFS=: 00:07:40.545 11:02:00 -- accel/accel.sh@20 -- # read -r var val 00:07:41.920 11:02:02 -- accel/accel.sh@21 -- # val= 00:07:41.920 11:02:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.920 11:02:02 -- accel/accel.sh@20 -- # IFS=: 00:07:41.920 11:02:02 -- accel/accel.sh@20 -- # read -r var val 00:07:41.920 11:02:02 -- accel/accel.sh@21 -- # val= 00:07:41.920 11:02:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.920 11:02:02 -- accel/accel.sh@20 -- # IFS=: 00:07:41.920 11:02:02 -- accel/accel.sh@20 -- # read -r var val 00:07:41.920 11:02:02 -- accel/accel.sh@21 -- # val= 00:07:41.921 11:02:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.921 11:02:02 -- accel/accel.sh@20 -- # IFS=: 00:07:41.921 11:02:02 -- accel/accel.sh@20 -- # read -r var val 00:07:41.921 11:02:02 -- accel/accel.sh@21 -- # val= 00:07:41.921 11:02:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.921 11:02:02 -- accel/accel.sh@20 -- # IFS=: 00:07:41.921 11:02:02 -- accel/accel.sh@20 -- # read -r var val 00:07:41.921 11:02:02 -- accel/accel.sh@21 -- # val= 00:07:41.921 11:02:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.921 11:02:02 -- accel/accel.sh@20 -- # IFS=: 00:07:41.921 11:02:02 -- accel/accel.sh@20 -- # read -r var val 00:07:41.921 11:02:02 -- accel/accel.sh@21 -- # val= 00:07:41.921 11:02:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.921 11:02:02 -- accel/accel.sh@20 -- # IFS=: 00:07:41.921 11:02:02 -- accel/accel.sh@20 -- # read -r var val 00:07:41.921 11:02:02 -- accel/accel.sh@21 -- # val= 00:07:41.921 11:02:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.921 11:02:02 -- accel/accel.sh@20 -- # IFS=: 00:07:41.921 11:02:02 -- accel/accel.sh@20 -- # read -r var val 00:07:41.921 11:02:02 -- accel/accel.sh@21 -- # val= 00:07:41.921 11:02:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.921 11:02:02 
-- accel/accel.sh@20 -- # IFS=: 00:07:41.921 11:02:02 -- accel/accel.sh@20 -- # read -r var val 00:07:41.921 11:02:02 -- accel/accel.sh@21 -- # val= 00:07:41.921 11:02:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.921 11:02:02 -- accel/accel.sh@20 -- # IFS=: 00:07:41.921 11:02:02 -- accel/accel.sh@20 -- # read -r var val 00:07:41.921 11:02:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:41.921 11:02:02 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:41.921 11:02:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.921 00:07:41.921 real 0m2.716s 00:07:41.921 user 0m9.180s 00:07:41.921 sys 0m0.244s 00:07:41.921 11:02:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:41.921 11:02:02 -- common/autotest_common.sh@10 -- # set +x 00:07:41.921 ************************************ 00:07:41.921 END TEST accel_decomp_full_mcore 00:07:41.921 ************************************ 00:07:41.921 11:02:02 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:41.921 11:02:02 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:41.921 11:02:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:41.921 11:02:02 -- common/autotest_common.sh@10 -- # set +x 00:07:41.921 ************************************ 00:07:41.921 START TEST accel_decomp_mthread 00:07:41.921 ************************************ 00:07:41.921 11:02:02 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:41.921 11:02:02 -- accel/accel.sh@16 -- # local accel_opc 00:07:41.921 11:02:02 -- accel/accel.sh@17 -- # local accel_module 00:07:41.921 11:02:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:41.921 11:02:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:41.921 11:02:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:41.921 11:02:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:41.921 11:02:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.921 11:02:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.921 11:02:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:41.921 11:02:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:41.921 11:02:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:41.921 11:02:02 -- accel/accel.sh@42 -- # jq -r . 00:07:41.921 [2024-12-13 11:02:02.183560] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:41.921 [2024-12-13 11:02:02.183639] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1468099 ] 00:07:41.921 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.921 [2024-12-13 11:02:02.236880] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.921 [2024-12-13 11:02:02.300708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.299 11:02:03 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:43.299 00:07:43.299 SPDK Configuration: 00:07:43.299 Core mask: 0x1 00:07:43.299 00:07:43.299 Accel Perf Configuration: 00:07:43.299 Workload Type: decompress 00:07:43.299 Transfer size: 4096 bytes 00:07:43.299 Vector count 1 00:07:43.299 Module: software 00:07:43.299 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:43.299 Queue depth: 32 00:07:43.299 Allocate depth: 32 00:07:43.299 # threads/core: 2 00:07:43.299 Run time: 1 seconds 00:07:43.299 Verify: Yes 00:07:43.299 00:07:43.299 Running for 1 seconds... 00:07:43.299 00:07:43.299 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:43.299 ------------------------------------------------------------------------------------ 00:07:43.299 0,1 44864/s 82 MiB/s 0 0 00:07:43.299 0,0 44768/s 82 MiB/s 0 0 00:07:43.299 ==================================================================================== 00:07:43.299 Total 89632/s 350 MiB/s 0 0' 00:07:43.299 11:02:03 -- accel/accel.sh@20 -- # IFS=: 00:07:43.299 11:02:03 -- accel/accel.sh@20 -- # read -r var val 00:07:43.299 11:02:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:43.299 11:02:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:43.299 11:02:03 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.299 11:02:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:43.299 11:02:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.299 11:02:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.299 11:02:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:43.299 11:02:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:43.299 11:02:03 -- accel/accel.sh@41 -- # local IFS=, 00:07:43.299 11:02:03 -- accel/accel.sh@42 -- # jq -r . 00:07:43.299 [2024-12-13 11:02:03.521265] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:43.299 [2024-12-13 11:02:03.521351] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1468365 ] 00:07:43.299 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.299 [2024-12-13 11:02:03.573837] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.299 [2024-12-13 11:02:03.635813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.299 11:02:03 -- accel/accel.sh@21 -- # val= 00:07:43.299 11:02:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.299 11:02:03 -- accel/accel.sh@20 -- # IFS=: 00:07:43.299 11:02:03 -- accel/accel.sh@20 -- # read -r var val 00:07:43.299 11:02:03 -- accel/accel.sh@21 -- # val= 00:07:43.299 11:02:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.299 11:02:03 -- accel/accel.sh@20 -- # IFS=: 00:07:43.299 11:02:03 -- accel/accel.sh@20 -- # read -r var val 00:07:43.299 11:02:03 -- accel/accel.sh@21 -- # val= 00:07:43.299 11:02:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.299 11:02:03 -- accel/accel.sh@20 -- # IFS=: 00:07:43.299 11:02:03 -- accel/accel.sh@20 -- # read -r var val 00:07:43.299 11:02:03 -- accel/accel.sh@21 -- # val=0x1 00:07:43.299 11:02:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.299 11:02:03 -- accel/accel.sh@20 -- # IFS=: 00:07:43.299 11:02:03 -- accel/accel.sh@20 -- # read -r var val 00:07:43.299 11:02:03 -- accel/accel.sh@21 -- # val= 00:07:43.299 11:02:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.299 11:02:03 -- accel/accel.sh@20 -- # IFS=: 00:07:43.299 11:02:03 -- accel/accel.sh@20 -- # read -r var val 00:07:43.299 11:02:03 -- accel/accel.sh@21 -- # val= 00:07:43.299 11:02:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.299 11:02:03 -- accel/accel.sh@20 -- # IFS=: 00:07:43.299 11:02:03 -- accel/accel.sh@20 -- # read -r var val 00:07:43.299 11:02:03 -- accel/accel.sh@21 -- # val=decompress 00:07:43.299 11:02:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.299 11:02:03 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:43.299 11:02:03 -- accel/accel.sh@20 -- # IFS=: 00:07:43.299 11:02:03 -- accel/accel.sh@20 -- # read -r var val 00:07:43.299 11:02:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:43.299 11:02:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.299 11:02:03 -- accel/accel.sh@20 -- # IFS=: 00:07:43.299 11:02:03 -- accel/accel.sh@20 -- # read -r var val 00:07:43.299 11:02:03 -- accel/accel.sh@21 -- # val= 00:07:43.299 11:02:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.299 11:02:03 -- accel/accel.sh@20 -- # IFS=: 00:07:43.299 11:02:03 -- accel/accel.sh@20 -- # read -r var val 00:07:43.299 11:02:03 -- accel/accel.sh@21 -- # val=software 00:07:43.299 11:02:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.299 11:02:03 -- accel/accel.sh@23 -- # accel_module=software 00:07:43.299 11:02:03 -- accel/accel.sh@20 -- # IFS=: 00:07:43.299 11:02:03 -- accel/accel.sh@20 -- # read -r var val 00:07:43.299 11:02:03 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:43.299 11:02:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.299 11:02:03 -- accel/accel.sh@20 -- # IFS=: 00:07:43.299 11:02:03 -- accel/accel.sh@20 -- # read -r var val 00:07:43.299 11:02:03 -- accel/accel.sh@21 -- # val=32 00:07:43.299 11:02:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.299 11:02:03 -- accel/accel.sh@20 -- # IFS=: 00:07:43.299 11:02:03 -- 
accel/accel.sh@20 -- # read -r var val 00:07:43.299 11:02:03 -- accel/accel.sh@21 -- # val=32 00:07:43.299 11:02:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.299 11:02:03 -- accel/accel.sh@20 -- # IFS=: 00:07:43.299 11:02:03 -- accel/accel.sh@20 -- # read -r var val 00:07:43.299 11:02:03 -- accel/accel.sh@21 -- # val=2 00:07:43.299 11:02:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.299 11:02:03 -- accel/accel.sh@20 -- # IFS=: 00:07:43.299 11:02:03 -- accel/accel.sh@20 -- # read -r var val 00:07:43.299 11:02:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:43.299 11:02:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.299 11:02:03 -- accel/accel.sh@20 -- # IFS=: 00:07:43.299 11:02:03 -- accel/accel.sh@20 -- # read -r var val 00:07:43.299 11:02:03 -- accel/accel.sh@21 -- # val=Yes 00:07:43.299 11:02:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.299 11:02:03 -- accel/accel.sh@20 -- # IFS=: 00:07:43.299 11:02:03 -- accel/accel.sh@20 -- # read -r var val 00:07:43.299 11:02:03 -- accel/accel.sh@21 -- # val= 00:07:43.299 11:02:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.299 11:02:03 -- accel/accel.sh@20 -- # IFS=: 00:07:43.299 11:02:03 -- accel/accel.sh@20 -- # read -r var val 00:07:43.299 11:02:03 -- accel/accel.sh@21 -- # val= 00:07:43.299 11:02:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.299 11:02:03 -- accel/accel.sh@20 -- # IFS=: 00:07:43.299 11:02:03 -- accel/accel.sh@20 -- # read -r var val 00:07:44.677 11:02:04 -- accel/accel.sh@21 -- # val= 00:07:44.677 11:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.677 11:02:04 -- accel/accel.sh@20 -- # IFS=: 00:07:44.677 11:02:04 -- accel/accel.sh@20 -- # read -r var val 00:07:44.677 11:02:04 -- accel/accel.sh@21 -- # val= 00:07:44.677 11:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.677 11:02:04 -- accel/accel.sh@20 -- # IFS=: 00:07:44.677 11:02:04 -- accel/accel.sh@20 -- # read -r var val 00:07:44.677 11:02:04 -- accel/accel.sh@21 -- # val= 00:07:44.677 11:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.677 11:02:04 -- accel/accel.sh@20 -- # IFS=: 00:07:44.677 11:02:04 -- accel/accel.sh@20 -- # read -r var val 00:07:44.677 11:02:04 -- accel/accel.sh@21 -- # val= 00:07:44.677 11:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.677 11:02:04 -- accel/accel.sh@20 -- # IFS=: 00:07:44.677 11:02:04 -- accel/accel.sh@20 -- # read -r var val 00:07:44.677 11:02:04 -- accel/accel.sh@21 -- # val= 00:07:44.677 11:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.677 11:02:04 -- accel/accel.sh@20 -- # IFS=: 00:07:44.677 11:02:04 -- accel/accel.sh@20 -- # read -r var val 00:07:44.677 11:02:04 -- accel/accel.sh@21 -- # val= 00:07:44.677 11:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.677 11:02:04 -- accel/accel.sh@20 -- # IFS=: 00:07:44.677 11:02:04 -- accel/accel.sh@20 -- # read -r var val 00:07:44.677 11:02:04 -- accel/accel.sh@21 -- # val= 00:07:44.677 11:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.677 11:02:04 -- accel/accel.sh@20 -- # IFS=: 00:07:44.677 11:02:04 -- accel/accel.sh@20 -- # read -r var val 00:07:44.677 11:02:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:44.677 11:02:04 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:44.677 11:02:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.677 00:07:44.677 real 0m2.681s 00:07:44.677 user 0m2.456s 00:07:44.677 sys 0m0.234s 00:07:44.677 11:02:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:44.677 11:02:04 -- common/autotest_common.sh@10 -- # set +x 
00:07:44.677 ************************************ 00:07:44.677 END TEST accel_decomp_mthread 00:07:44.678 ************************************ 00:07:44.678 11:02:04 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:44.678 11:02:04 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:44.678 11:02:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:44.678 11:02:04 -- common/autotest_common.sh@10 -- # set +x 00:07:44.678 ************************************ 00:07:44.678 START TEST accel_deomp_full_mthread 00:07:44.678 ************************************ 00:07:44.678 11:02:04 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:44.678 11:02:04 -- accel/accel.sh@16 -- # local accel_opc 00:07:44.678 11:02:04 -- accel/accel.sh@17 -- # local accel_module 00:07:44.678 11:02:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:44.678 11:02:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:44.678 11:02:04 -- accel/accel.sh@12 -- # build_accel_config 00:07:44.678 11:02:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:44.678 11:02:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.678 11:02:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.678 11:02:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:44.678 11:02:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:44.678 11:02:04 -- accel/accel.sh@41 -- # local IFS=, 00:07:44.678 11:02:04 -- accel/accel.sh@42 -- # jq -r . 00:07:44.678 [2024-12-13 11:02:04.901001] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:44.678 [2024-12-13 11:02:04.901065] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1468649 ] 00:07:44.678 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.678 [2024-12-13 11:02:04.952832] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.678 [2024-12-13 11:02:05.017019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.056 11:02:06 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:46.056 00:07:46.056 SPDK Configuration: 00:07:46.056 Core mask: 0x1 00:07:46.056 00:07:46.056 Accel Perf Configuration: 00:07:46.056 Workload Type: decompress 00:07:46.056 Transfer size: 111250 bytes 00:07:46.056 Vector count 1 00:07:46.056 Module: software 00:07:46.056 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:46.056 Queue depth: 32 00:07:46.056 Allocate depth: 32 00:07:46.056 # threads/core: 2 00:07:46.056 Run time: 1 seconds 00:07:46.056 Verify: Yes 00:07:46.056 00:07:46.056 Running for 1 seconds... 
00:07:46.056 00:07:46.056 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:46.056 ------------------------------------------------------------------------------------ 00:07:46.056 0,1 3072/s 126 MiB/s 0 0 00:07:46.056 0,0 3072/s 126 MiB/s 0 0 00:07:46.056 ==================================================================================== 00:07:46.056 Total 6144/s 651 MiB/s 0 0' 00:07:46.056 11:02:06 -- accel/accel.sh@20 -- # IFS=: 00:07:46.056 11:02:06 -- accel/accel.sh@20 -- # read -r var val 00:07:46.056 11:02:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:46.056 11:02:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:46.056 11:02:06 -- accel/accel.sh@12 -- # build_accel_config 00:07:46.056 11:02:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:46.056 11:02:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.056 11:02:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.056 11:02:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:46.056 11:02:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:46.056 11:02:06 -- accel/accel.sh@41 -- # local IFS=, 00:07:46.056 11:02:06 -- accel/accel.sh@42 -- # jq -r . 00:07:46.056 [2024-12-13 11:02:06.255680] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:46.056 [2024-12-13 11:02:06.255739] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1468892 ] 00:07:46.056 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.056 [2024-12-13 11:02:06.307115] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.056 [2024-12-13 11:02:06.369866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.056 11:02:06 -- accel/accel.sh@21 -- # val= 00:07:46.056 11:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.056 11:02:06 -- accel/accel.sh@20 -- # IFS=: 00:07:46.056 11:02:06 -- accel/accel.sh@20 -- # read -r var val 00:07:46.056 11:02:06 -- accel/accel.sh@21 -- # val= 00:07:46.056 11:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.056 11:02:06 -- accel/accel.sh@20 -- # IFS=: 00:07:46.056 11:02:06 -- accel/accel.sh@20 -- # read -r var val 00:07:46.056 11:02:06 -- accel/accel.sh@21 -- # val= 00:07:46.056 11:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.056 11:02:06 -- accel/accel.sh@20 -- # IFS=: 00:07:46.056 11:02:06 -- accel/accel.sh@20 -- # read -r var val 00:07:46.056 11:02:06 -- accel/accel.sh@21 -- # val=0x1 00:07:46.056 11:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.056 11:02:06 -- accel/accel.sh@20 -- # IFS=: 00:07:46.056 11:02:06 -- accel/accel.sh@20 -- # read -r var val 00:07:46.056 11:02:06 -- accel/accel.sh@21 -- # val= 00:07:46.056 11:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.056 11:02:06 -- accel/accel.sh@20 -- # IFS=: 00:07:46.056 11:02:06 -- accel/accel.sh@20 -- # read -r var val 00:07:46.056 11:02:06 -- accel/accel.sh@21 -- # val= 00:07:46.056 11:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.056 11:02:06 -- accel/accel.sh@20 -- # IFS=: 00:07:46.056 11:02:06 -- accel/accel.sh@20 -- # read -r var val 00:07:46.056 11:02:06 -- accel/accel.sh@21 -- # val=decompress 00:07:46.056 11:02:06 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:46.056 11:02:06 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:46.056 11:02:06 -- accel/accel.sh@20 -- # IFS=: 00:07:46.056 11:02:06 -- accel/accel.sh@20 -- # read -r var val 00:07:46.056 11:02:06 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:46.056 11:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.056 11:02:06 -- accel/accel.sh@20 -- # IFS=: 00:07:46.056 11:02:06 -- accel/accel.sh@20 -- # read -r var val 00:07:46.056 11:02:06 -- accel/accel.sh@21 -- # val= 00:07:46.056 11:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.056 11:02:06 -- accel/accel.sh@20 -- # IFS=: 00:07:46.056 11:02:06 -- accel/accel.sh@20 -- # read -r var val 00:07:46.056 11:02:06 -- accel/accel.sh@21 -- # val=software 00:07:46.056 11:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.056 11:02:06 -- accel/accel.sh@23 -- # accel_module=software 00:07:46.056 11:02:06 -- accel/accel.sh@20 -- # IFS=: 00:07:46.056 11:02:06 -- accel/accel.sh@20 -- # read -r var val 00:07:46.056 11:02:06 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:46.056 11:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.056 11:02:06 -- accel/accel.sh@20 -- # IFS=: 00:07:46.056 11:02:06 -- accel/accel.sh@20 -- # read -r var val 00:07:46.056 11:02:06 -- accel/accel.sh@21 -- # val=32 00:07:46.056 11:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.056 11:02:06 -- accel/accel.sh@20 -- # IFS=: 00:07:46.056 11:02:06 -- accel/accel.sh@20 -- # read -r var val 00:07:46.056 11:02:06 -- accel/accel.sh@21 -- # val=32 00:07:46.056 11:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.056 11:02:06 -- accel/accel.sh@20 -- # IFS=: 00:07:46.056 11:02:06 -- accel/accel.sh@20 -- # read -r var val 00:07:46.056 11:02:06 -- accel/accel.sh@21 -- # val=2 00:07:46.056 11:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.056 11:02:06 -- accel/accel.sh@20 -- # IFS=: 00:07:46.056 11:02:06 -- accel/accel.sh@20 -- # read -r var val 00:07:46.056 11:02:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:46.056 11:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.056 11:02:06 -- accel/accel.sh@20 -- # IFS=: 00:07:46.056 11:02:06 -- accel/accel.sh@20 -- # read -r var val 00:07:46.056 11:02:06 -- accel/accel.sh@21 -- # val=Yes 00:07:46.056 11:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.056 11:02:06 -- accel/accel.sh@20 -- # IFS=: 00:07:46.056 11:02:06 -- accel/accel.sh@20 -- # read -r var val 00:07:46.056 11:02:06 -- accel/accel.sh@21 -- # val= 00:07:46.056 11:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.056 11:02:06 -- accel/accel.sh@20 -- # IFS=: 00:07:46.056 11:02:06 -- accel/accel.sh@20 -- # read -r var val 00:07:46.056 11:02:06 -- accel/accel.sh@21 -- # val= 00:07:46.056 11:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.057 11:02:06 -- accel/accel.sh@20 -- # IFS=: 00:07:46.057 11:02:06 -- accel/accel.sh@20 -- # read -r var val 00:07:47.434 11:02:07 -- accel/accel.sh@21 -- # val= 00:07:47.434 11:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.434 11:02:07 -- accel/accel.sh@20 -- # IFS=: 00:07:47.434 11:02:07 -- accel/accel.sh@20 -- # read -r var val 00:07:47.434 11:02:07 -- accel/accel.sh@21 -- # val= 00:07:47.434 11:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.434 11:02:07 -- accel/accel.sh@20 -- # IFS=: 00:07:47.434 11:02:07 -- accel/accel.sh@20 -- # read -r var val 00:07:47.434 11:02:07 -- accel/accel.sh@21 -- # val= 00:07:47.434 11:02:07 -- accel/accel.sh@22 -- # case "$var" in 
00:07:47.434 11:02:07 -- accel/accel.sh@20 -- # IFS=: 00:07:47.434 11:02:07 -- accel/accel.sh@20 -- # read -r var val 00:07:47.434 11:02:07 -- accel/accel.sh@21 -- # val= 00:07:47.434 11:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.434 11:02:07 -- accel/accel.sh@20 -- # IFS=: 00:07:47.434 11:02:07 -- accel/accel.sh@20 -- # read -r var val 00:07:47.434 11:02:07 -- accel/accel.sh@21 -- # val= 00:07:47.434 11:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.434 11:02:07 -- accel/accel.sh@20 -- # IFS=: 00:07:47.434 11:02:07 -- accel/accel.sh@20 -- # read -r var val 00:07:47.434 11:02:07 -- accel/accel.sh@21 -- # val= 00:07:47.434 11:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.434 11:02:07 -- accel/accel.sh@20 -- # IFS=: 00:07:47.434 11:02:07 -- accel/accel.sh@20 -- # read -r var val 00:07:47.434 11:02:07 -- accel/accel.sh@21 -- # val= 00:07:47.434 11:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.434 11:02:07 -- accel/accel.sh@20 -- # IFS=: 00:07:47.434 11:02:07 -- accel/accel.sh@20 -- # read -r var val 00:07:47.434 11:02:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:47.434 11:02:07 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:47.434 11:02:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.434 00:07:47.434 real 0m2.715s 00:07:47.434 user 0m2.506s 00:07:47.434 sys 0m0.216s 00:07:47.434 11:02:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:47.434 11:02:07 -- common/autotest_common.sh@10 -- # set +x 00:07:47.434 ************************************ 00:07:47.434 END TEST accel_deomp_full_mthread 00:07:47.434 ************************************ 00:07:47.434 11:02:07 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:47.434 11:02:07 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:47.434 11:02:07 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:47.434 11:02:07 -- accel/accel.sh@129 -- # build_accel_config 00:07:47.434 11:02:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:47.434 11:02:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:47.434 11:02:07 -- common/autotest_common.sh@10 -- # set +x 00:07:47.434 11:02:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.434 11:02:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.434 11:02:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:47.434 11:02:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:47.434 11:02:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:47.434 11:02:07 -- accel/accel.sh@42 -- # jq -r . 00:07:47.434 ************************************ 00:07:47.434 START TEST accel_dif_functional_tests 00:07:47.434 ************************************ 00:07:47.434 11:02:07 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:47.434 [2024-12-13 11:02:07.657932] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:47.434 [2024-12-13 11:02:07.657980] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1469151 ] 00:07:47.434 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.434 [2024-12-13 11:02:07.703150] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:47.434 [2024-12-13 11:02:07.770104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.434 [2024-12-13 11:02:07.770198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.434 [2024-12-13 11:02:07.770198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.434 00:07:47.434 00:07:47.434 CUnit - A unit testing framework for C - Version 2.1-3 00:07:47.434 http://cunit.sourceforge.net/ 00:07:47.434 00:07:47.434 00:07:47.434 Suite: accel_dif 00:07:47.434 Test: verify: DIF generated, GUARD check ...passed 00:07:47.434 Test: verify: DIF generated, APPTAG check ...passed 00:07:47.434 Test: verify: DIF generated, REFTAG check ...passed 00:07:47.434 Test: verify: DIF not generated, GUARD check ...[2024-12-13 11:02:07.836467] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:47.434 [2024-12-13 11:02:07.836508] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:47.434 passed 00:07:47.434 Test: verify: DIF not generated, APPTAG check ...[2024-12-13 11:02:07.836537] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:47.434 [2024-12-13 11:02:07.836551] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:47.434 passed 00:07:47.434 Test: verify: DIF not generated, REFTAG check ...[2024-12-13 11:02:07.836566] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:47.434 [2024-12-13 11:02:07.836579] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:47.434 passed 00:07:47.434 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:47.434 Test: verify: APPTAG incorrect, APPTAG check ...[2024-12-13 11:02:07.836617] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:47.434 passed 00:07:47.434 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:47.434 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:47.434 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:47.434 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-12-13 11:02:07.836709] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:47.434 passed 00:07:47.434 Test: generate copy: DIF generated, GUARD check ...passed 00:07:47.434 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:47.435 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:47.435 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:47.435 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:47.435 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:47.435 Test: generate copy: iovecs-len validate ...[2024-12-13 11:02:07.836856] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:47.435 passed 00:07:47.435 Test: generate copy: buffer alignment validate ...passed 00:07:47.435 00:07:47.435 Run Summary: Type Total Ran Passed Failed Inactive 00:07:47.435 suites 1 1 n/a 0 0 00:07:47.435 tests 20 20 20 0 0 00:07:47.435 asserts 204 204 204 0 n/a 00:07:47.435 00:07:47.435 Elapsed time = 0.000 seconds 00:07:47.694 00:07:47.694 real 0m0.384s 00:07:47.694 user 0m0.604s 00:07:47.694 sys 0m0.128s 00:07:47.694 11:02:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:47.694 11:02:08 -- common/autotest_common.sh@10 -- # set +x 00:07:47.694 ************************************ 00:07:47.694 END TEST accel_dif_functional_tests 00:07:47.694 ************************************ 00:07:47.694 00:07:47.694 real 0m57.098s 00:07:47.694 user 1m5.674s 00:07:47.694 sys 0m6.056s 00:07:47.694 11:02:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:47.694 11:02:08 -- common/autotest_common.sh@10 -- # set +x 00:07:47.694 ************************************ 00:07:47.694 END TEST accel 00:07:47.694 ************************************ 00:07:47.694 11:02:08 -- spdk/autotest.sh@177 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:47.694 11:02:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:47.694 11:02:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:47.694 11:02:08 -- common/autotest_common.sh@10 -- # set +x 00:07:47.694 ************************************ 00:07:47.694 START TEST accel_rpc 00:07:47.694 ************************************ 00:07:47.694 11:02:08 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:47.694 * Looking for test storage... 00:07:47.694 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:07:47.694 11:02:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:47.694 11:02:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:47.694 11:02:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:47.694 11:02:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:47.694 11:02:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:47.694 11:02:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:47.694 11:02:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:47.694 11:02:08 -- scripts/common.sh@335 -- # IFS=.-: 00:07:47.694 11:02:08 -- scripts/common.sh@335 -- # read -ra ver1 00:07:47.694 11:02:08 -- scripts/common.sh@336 -- # IFS=.-: 00:07:47.694 11:02:08 -- scripts/common.sh@336 -- # read -ra ver2 00:07:47.694 11:02:08 -- scripts/common.sh@337 -- # local 'op=<' 00:07:47.694 11:02:08 -- scripts/common.sh@339 -- # ver1_l=2 00:07:47.694 11:02:08 -- scripts/common.sh@340 -- # ver2_l=1 00:07:47.694 11:02:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:47.694 11:02:08 -- scripts/common.sh@343 -- # case "$op" in 00:07:47.694 11:02:08 -- scripts/common.sh@344 -- # : 1 00:07:47.694 11:02:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:47.694 11:02:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:47.694 11:02:08 -- scripts/common.sh@364 -- # decimal 1 00:07:47.694 11:02:08 -- scripts/common.sh@352 -- # local d=1 00:07:47.694 11:02:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:47.694 11:02:08 -- scripts/common.sh@354 -- # echo 1 00:07:47.694 11:02:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:47.694 11:02:08 -- scripts/common.sh@365 -- # decimal 2 00:07:47.694 11:02:08 -- scripts/common.sh@352 -- # local d=2 00:07:47.694 11:02:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:47.694 11:02:08 -- scripts/common.sh@354 -- # echo 2 00:07:47.694 11:02:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:47.694 11:02:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:47.694 11:02:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:47.694 11:02:08 -- scripts/common.sh@367 -- # return 0 00:07:47.694 11:02:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:47.694 11:02:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:47.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.694 --rc genhtml_branch_coverage=1 00:07:47.694 --rc genhtml_function_coverage=1 00:07:47.694 --rc genhtml_legend=1 00:07:47.694 --rc geninfo_all_blocks=1 00:07:47.694 --rc geninfo_unexecuted_blocks=1 00:07:47.694 00:07:47.694 ' 00:07:47.694 11:02:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:47.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.694 --rc genhtml_branch_coverage=1 00:07:47.694 --rc genhtml_function_coverage=1 00:07:47.694 --rc genhtml_legend=1 00:07:47.694 --rc geninfo_all_blocks=1 00:07:47.694 --rc geninfo_unexecuted_blocks=1 00:07:47.694 00:07:47.694 ' 00:07:47.694 11:02:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:47.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.694 --rc genhtml_branch_coverage=1 00:07:47.694 --rc genhtml_function_coverage=1 00:07:47.694 --rc genhtml_legend=1 00:07:47.694 --rc geninfo_all_blocks=1 00:07:47.694 --rc geninfo_unexecuted_blocks=1 00:07:47.694 00:07:47.694 ' 00:07:47.694 11:02:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:47.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.694 --rc genhtml_branch_coverage=1 00:07:47.694 --rc genhtml_function_coverage=1 00:07:47.694 --rc genhtml_legend=1 00:07:47.694 --rc geninfo_all_blocks=1 00:07:47.694 --rc geninfo_unexecuted_blocks=1 00:07:47.694 00:07:47.694 ' 00:07:47.694 11:02:08 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:47.694 11:02:08 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1469276 00:07:47.694 11:02:08 -- accel/accel_rpc.sh@15 -- # waitforlisten 1469276 00:07:47.694 11:02:08 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:47.695 11:02:08 -- common/autotest_common.sh@829 -- # '[' -z 1469276 ']' 00:07:47.695 11:02:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.695 11:02:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:47.695 11:02:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:47.695 11:02:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:47.695 11:02:08 -- common/autotest_common.sh@10 -- # set +x 00:07:47.954 [2024-12-13 11:02:08.277008] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:47.954 [2024-12-13 11:02:08.277052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1469276 ] 00:07:47.954 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.954 [2024-12-13 11:02:08.326482] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.954 [2024-12-13 11:02:08.395872] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:47.954 [2024-12-13 11:02:08.395980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.522 11:02:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:48.522 11:02:09 -- common/autotest_common.sh@862 -- # return 0 00:07:48.522 11:02:09 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:48.522 11:02:09 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:48.522 11:02:09 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:48.522 11:02:09 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:48.522 11:02:09 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:48.522 11:02:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:48.522 11:02:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:48.522 11:02:09 -- common/autotest_common.sh@10 -- # set +x 00:07:48.522 ************************************ 00:07:48.522 START TEST accel_assign_opcode 00:07:48.522 ************************************ 00:07:48.522 11:02:09 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:48.522 11:02:09 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:48.522 11:02:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.522 11:02:09 -- common/autotest_common.sh@10 -- # set +x 00:07:48.522 [2024-12-13 11:02:09.077902] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:48.522 11:02:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.522 11:02:09 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:48.522 11:02:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.522 11:02:09 -- common/autotest_common.sh@10 -- # set +x 00:07:48.522 [2024-12-13 11:02:09.085917] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:48.522 11:02:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.522 11:02:09 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:48.522 11:02:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.522 11:02:09 -- common/autotest_common.sh@10 -- # set +x 00:07:48.781 11:02:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.781 11:02:09 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:48.781 11:02:09 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:48.781 11:02:09 -- accel/accel_rpc.sh@42 -- # grep software 00:07:48.781 11:02:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.781 11:02:09 -- common/autotest_common.sh@10 -- # set +x 00:07:48.781 11:02:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:48.781 software 00:07:48.781 00:07:48.781 real 0m0.234s 00:07:48.781 user 0m0.044s 00:07:48.781 sys 0m0.010s 00:07:48.781 11:02:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:48.781 11:02:09 -- common/autotest_common.sh@10 -- # set +x 00:07:48.781 ************************************ 00:07:48.781 END TEST accel_assign_opcode 00:07:48.781 ************************************ 00:07:48.781 11:02:09 -- accel/accel_rpc.sh@55 -- # killprocess 1469276 00:07:48.781 11:02:09 -- common/autotest_common.sh@936 -- # '[' -z 1469276 ']' 00:07:48.781 11:02:09 -- common/autotest_common.sh@940 -- # kill -0 1469276 00:07:48.781 11:02:09 -- common/autotest_common.sh@941 -- # uname 00:07:48.781 11:02:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:48.781 11:02:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1469276 00:07:49.040 11:02:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:49.040 11:02:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:49.040 11:02:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1469276' 00:07:49.040 killing process with pid 1469276 00:07:49.040 11:02:09 -- common/autotest_common.sh@955 -- # kill 1469276 00:07:49.040 11:02:09 -- common/autotest_common.sh@960 -- # wait 1469276 00:07:49.300 00:07:49.300 real 0m1.616s 00:07:49.300 user 0m1.670s 00:07:49.300 sys 0m0.391s 00:07:49.300 11:02:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:49.300 11:02:09 -- common/autotest_common.sh@10 -- # set +x 00:07:49.300 ************************************ 00:07:49.300 END TEST accel_rpc 00:07:49.300 ************************************ 00:07:49.300 11:02:09 -- spdk/autotest.sh@178 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:49.300 11:02:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:49.300 11:02:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.300 11:02:09 -- common/autotest_common.sh@10 -- # set +x 00:07:49.300 ************************************ 00:07:49.300 START TEST app_cmdline 00:07:49.300 ************************************ 00:07:49.300 11:02:09 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:49.300 * Looking for test storage... 
00:07:49.300 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:49.300 11:02:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:49.300 11:02:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:49.300 11:02:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:49.559 11:02:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:49.559 11:02:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:49.559 11:02:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:49.559 11:02:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:49.559 11:02:09 -- scripts/common.sh@335 -- # IFS=.-: 00:07:49.559 11:02:09 -- scripts/common.sh@335 -- # read -ra ver1 00:07:49.559 11:02:09 -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.559 11:02:09 -- scripts/common.sh@336 -- # read -ra ver2 00:07:49.559 11:02:09 -- scripts/common.sh@337 -- # local 'op=<' 00:07:49.559 11:02:09 -- scripts/common.sh@339 -- # ver1_l=2 00:07:49.559 11:02:09 -- scripts/common.sh@340 -- # ver2_l=1 00:07:49.559 11:02:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:49.559 11:02:09 -- scripts/common.sh@343 -- # case "$op" in 00:07:49.559 11:02:09 -- scripts/common.sh@344 -- # : 1 00:07:49.559 11:02:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:49.559 11:02:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:49.559 11:02:09 -- scripts/common.sh@364 -- # decimal 1 00:07:49.559 11:02:09 -- scripts/common.sh@352 -- # local d=1 00:07:49.559 11:02:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.559 11:02:09 -- scripts/common.sh@354 -- # echo 1 00:07:49.559 11:02:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:49.559 11:02:09 -- scripts/common.sh@365 -- # decimal 2 00:07:49.559 11:02:09 -- scripts/common.sh@352 -- # local d=2 00:07:49.559 11:02:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.559 11:02:09 -- scripts/common.sh@354 -- # echo 2 00:07:49.559 11:02:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:49.559 11:02:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:49.559 11:02:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:49.559 11:02:09 -- scripts/common.sh@367 -- # return 0 00:07:49.559 11:02:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.559 11:02:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:49.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.559 --rc genhtml_branch_coverage=1 00:07:49.559 --rc genhtml_function_coverage=1 00:07:49.559 --rc genhtml_legend=1 00:07:49.559 --rc geninfo_all_blocks=1 00:07:49.559 --rc geninfo_unexecuted_blocks=1 00:07:49.559 00:07:49.559 ' 00:07:49.559 11:02:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:49.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.559 --rc genhtml_branch_coverage=1 00:07:49.559 --rc genhtml_function_coverage=1 00:07:49.559 --rc genhtml_legend=1 00:07:49.559 --rc geninfo_all_blocks=1 00:07:49.559 --rc geninfo_unexecuted_blocks=1 00:07:49.559 00:07:49.559 ' 00:07:49.559 11:02:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:49.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.559 --rc genhtml_branch_coverage=1 00:07:49.559 --rc genhtml_function_coverage=1 00:07:49.559 --rc genhtml_legend=1 00:07:49.559 --rc geninfo_all_blocks=1 00:07:49.559 --rc geninfo_unexecuted_blocks=1 00:07:49.559 00:07:49.559 ' 
00:07:49.559 11:02:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:49.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.559 --rc genhtml_branch_coverage=1 00:07:49.559 --rc genhtml_function_coverage=1 00:07:49.559 --rc genhtml_legend=1 00:07:49.559 --rc geninfo_all_blocks=1 00:07:49.559 --rc geninfo_unexecuted_blocks=1 00:07:49.559 00:07:49.559 ' 00:07:49.559 11:02:09 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:49.559 11:02:09 -- app/cmdline.sh@17 -- # spdk_tgt_pid=1469620 00:07:49.559 11:02:09 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:49.559 11:02:09 -- app/cmdline.sh@18 -- # waitforlisten 1469620 00:07:49.559 11:02:09 -- common/autotest_common.sh@829 -- # '[' -z 1469620 ']' 00:07:49.559 11:02:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.559 11:02:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:49.559 11:02:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.559 11:02:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:49.559 11:02:09 -- common/autotest_common.sh@10 -- # set +x 00:07:49.559 [2024-12-13 11:02:09.961821] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:49.559 [2024-12-13 11:02:09.961866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1469620 ] 00:07:49.559 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.559 [2024-12-13 11:02:10.015206] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.559 [2024-12-13 11:02:10.098536] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:49.559 [2024-12-13 11:02:10.098653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.496 11:02:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:50.496 11:02:10 -- common/autotest_common.sh@862 -- # return 0 00:07:50.496 11:02:10 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:50.496 { 00:07:50.496 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:07:50.496 "fields": { 00:07:50.496 "major": 24, 00:07:50.496 "minor": 1, 00:07:50.496 "patch": 1, 00:07:50.496 "suffix": "-pre", 00:07:50.496 "commit": "c13c99a5e" 00:07:50.496 } 00:07:50.496 } 00:07:50.496 11:02:10 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:50.496 11:02:10 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:50.496 11:02:10 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:50.496 11:02:10 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:50.496 11:02:10 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:50.496 11:02:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.496 11:02:10 -- common/autotest_common.sh@10 -- # set +x 00:07:50.496 11:02:10 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:50.496 11:02:10 -- app/cmdline.sh@26 -- # sort 00:07:50.496 11:02:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.496 11:02:10 -- 
app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:50.496 11:02:10 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:50.496 11:02:10 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:50.496 11:02:10 -- common/autotest_common.sh@650 -- # local es=0 00:07:50.496 11:02:10 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:50.496 11:02:10 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:50.496 11:02:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.496 11:02:10 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:50.496 11:02:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.496 11:02:10 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:50.496 11:02:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.496 11:02:10 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:50.496 11:02:10 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:50.496 11:02:10 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:50.755 request: 00:07:50.755 { 00:07:50.755 "method": "env_dpdk_get_mem_stats", 00:07:50.755 "req_id": 1 00:07:50.755 } 00:07:50.755 Got JSON-RPC error response 00:07:50.755 response: 00:07:50.755 { 00:07:50.755 "code": -32601, 00:07:50.755 "message": "Method not found" 00:07:50.755 } 00:07:50.755 11:02:11 -- common/autotest_common.sh@653 -- # es=1 00:07:50.755 11:02:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:50.755 11:02:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:50.755 11:02:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:50.755 11:02:11 -- app/cmdline.sh@1 -- # killprocess 1469620 00:07:50.755 11:02:11 -- common/autotest_common.sh@936 -- # '[' -z 1469620 ']' 00:07:50.755 11:02:11 -- common/autotest_common.sh@940 -- # kill -0 1469620 00:07:50.755 11:02:11 -- common/autotest_common.sh@941 -- # uname 00:07:50.755 11:02:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:50.755 11:02:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1469620 00:07:50.755 11:02:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:50.755 11:02:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:50.755 11:02:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1469620' 00:07:50.755 killing process with pid 1469620 00:07:50.755 11:02:11 -- common/autotest_common.sh@955 -- # kill 1469620 00:07:50.755 11:02:11 -- common/autotest_common.sh@960 -- # wait 1469620 00:07:51.014 00:07:51.014 real 0m1.752s 00:07:51.014 user 0m2.033s 00:07:51.014 sys 0m0.448s 00:07:51.014 11:02:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:51.014 11:02:11 -- common/autotest_common.sh@10 -- # set +x 00:07:51.014 ************************************ 00:07:51.014 END TEST app_cmdline 00:07:51.014 ************************************ 00:07:51.014 11:02:11 -- spdk/autotest.sh@179 -- # run_test version 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:51.014 11:02:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:51.014 11:02:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.014 11:02:11 -- common/autotest_common.sh@10 -- # set +x 00:07:51.014 ************************************ 00:07:51.014 START TEST version 00:07:51.014 ************************************ 00:07:51.014 11:02:11 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:51.273 * Looking for test storage... 00:07:51.273 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:51.273 11:02:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:51.273 11:02:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:51.273 11:02:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:51.273 11:02:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:51.273 11:02:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:51.273 11:02:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:51.273 11:02:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:51.273 11:02:11 -- scripts/common.sh@335 -- # IFS=.-: 00:07:51.273 11:02:11 -- scripts/common.sh@335 -- # read -ra ver1 00:07:51.273 11:02:11 -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.273 11:02:11 -- scripts/common.sh@336 -- # read -ra ver2 00:07:51.273 11:02:11 -- scripts/common.sh@337 -- # local 'op=<' 00:07:51.273 11:02:11 -- scripts/common.sh@339 -- # ver1_l=2 00:07:51.273 11:02:11 -- scripts/common.sh@340 -- # ver2_l=1 00:07:51.273 11:02:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:51.273 11:02:11 -- scripts/common.sh@343 -- # case "$op" in 00:07:51.273 11:02:11 -- scripts/common.sh@344 -- # : 1 00:07:51.273 11:02:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:51.273 11:02:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:51.273 11:02:11 -- scripts/common.sh@364 -- # decimal 1 00:07:51.273 11:02:11 -- scripts/common.sh@352 -- # local d=1 00:07:51.273 11:02:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.273 11:02:11 -- scripts/common.sh@354 -- # echo 1 00:07:51.273 11:02:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:51.273 11:02:11 -- scripts/common.sh@365 -- # decimal 2 00:07:51.273 11:02:11 -- scripts/common.sh@352 -- # local d=2 00:07:51.273 11:02:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.273 11:02:11 -- scripts/common.sh@354 -- # echo 2 00:07:51.273 11:02:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:51.273 11:02:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:51.273 11:02:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:51.273 11:02:11 -- scripts/common.sh@367 -- # return 0 00:07:51.273 11:02:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.273 11:02:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:51.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.273 --rc genhtml_branch_coverage=1 00:07:51.273 --rc genhtml_function_coverage=1 00:07:51.273 --rc genhtml_legend=1 00:07:51.273 --rc geninfo_all_blocks=1 00:07:51.273 --rc geninfo_unexecuted_blocks=1 00:07:51.273 00:07:51.273 ' 00:07:51.273 11:02:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:51.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.273 --rc genhtml_branch_coverage=1 00:07:51.273 --rc genhtml_function_coverage=1 00:07:51.273 --rc genhtml_legend=1 00:07:51.273 --rc geninfo_all_blocks=1 00:07:51.273 --rc geninfo_unexecuted_blocks=1 00:07:51.273 00:07:51.273 ' 00:07:51.273 11:02:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:51.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.273 --rc genhtml_branch_coverage=1 00:07:51.273 --rc genhtml_function_coverage=1 00:07:51.273 --rc genhtml_legend=1 00:07:51.273 --rc geninfo_all_blocks=1 00:07:51.273 --rc geninfo_unexecuted_blocks=1 00:07:51.273 00:07:51.273 ' 00:07:51.273 11:02:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:51.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.273 --rc genhtml_branch_coverage=1 00:07:51.273 --rc genhtml_function_coverage=1 00:07:51.273 --rc genhtml_legend=1 00:07:51.273 --rc geninfo_all_blocks=1 00:07:51.273 --rc geninfo_unexecuted_blocks=1 00:07:51.273 00:07:51.273 ' 00:07:51.273 11:02:11 -- app/version.sh@17 -- # get_header_version major 00:07:51.274 11:02:11 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:51.274 11:02:11 -- app/version.sh@14 -- # cut -f2 00:07:51.274 11:02:11 -- app/version.sh@14 -- # tr -d '"' 00:07:51.274 11:02:11 -- app/version.sh@17 -- # major=24 00:07:51.274 11:02:11 -- app/version.sh@18 -- # get_header_version minor 00:07:51.274 11:02:11 -- app/version.sh@14 -- # cut -f2 00:07:51.274 11:02:11 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:51.274 11:02:11 -- app/version.sh@14 -- # tr -d '"' 00:07:51.274 11:02:11 -- app/version.sh@18 -- # minor=1 00:07:51.274 11:02:11 -- app/version.sh@19 -- # get_header_version patch 00:07:51.274 11:02:11 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:51.274 11:02:11 -- app/version.sh@14 -- # cut -f2 00:07:51.274 11:02:11 -- app/version.sh@14 -- # tr -d '"' 00:07:51.274 11:02:11 -- app/version.sh@19 -- # patch=1 00:07:51.274 11:02:11 -- app/version.sh@20 -- # get_header_version suffix 00:07:51.274 11:02:11 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:51.274 11:02:11 -- app/version.sh@14 -- # cut -f2 00:07:51.274 11:02:11 -- app/version.sh@14 -- # tr -d '"' 00:07:51.274 11:02:11 -- app/version.sh@20 -- # suffix=-pre 00:07:51.274 11:02:11 -- app/version.sh@22 -- # version=24.1 00:07:51.274 11:02:11 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:51.274 11:02:11 -- app/version.sh@25 -- # version=24.1.1 00:07:51.274 11:02:11 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:51.274 11:02:11 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:51.274 11:02:11 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:51.274 11:02:11 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:51.274 11:02:11 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:51.274 00:07:51.274 real 0m0.221s 00:07:51.274 user 0m0.131s 00:07:51.274 sys 0m0.130s 00:07:51.274 11:02:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:51.274 11:02:11 -- common/autotest_common.sh@10 -- # set +x 00:07:51.274 ************************************ 00:07:51.274 END TEST version 00:07:51.274 ************************************ 00:07:51.274 11:02:11 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:51.274 11:02:11 -- spdk/autotest.sh@191 -- # uname -s 00:07:51.274 11:02:11 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:07:51.274 11:02:11 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:51.274 11:02:11 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:51.274 11:02:11 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:07:51.274 11:02:11 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:07:51.274 11:02:11 -- spdk/autotest.sh@255 -- # timing_exit lib 00:07:51.274 11:02:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:51.274 11:02:11 -- common/autotest_common.sh@10 -- # set +x 00:07:51.274 11:02:11 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:07:51.274 11:02:11 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:07:51.274 11:02:11 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:07:51.274 11:02:11 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:07:51.274 11:02:11 -- spdk/autotest.sh@278 -- # '[' rdma = rdma ']' 00:07:51.274 11:02:11 -- spdk/autotest.sh@279 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:51.274 11:02:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:51.274 11:02:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.274 11:02:11 -- common/autotest_common.sh@10 -- # set +x 00:07:51.274 ************************************ 00:07:51.274 START TEST nvmf_rdma 00:07:51.274 ************************************ 00:07:51.274 11:02:11 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:51.533 * Looking 
for test storage... 00:07:51.533 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:07:51.533 11:02:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:51.533 11:02:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:51.533 11:02:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:51.533 11:02:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:51.533 11:02:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:51.533 11:02:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:51.533 11:02:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:51.533 11:02:11 -- scripts/common.sh@335 -- # IFS=.-: 00:07:51.533 11:02:11 -- scripts/common.sh@335 -- # read -ra ver1 00:07:51.533 11:02:11 -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.533 11:02:11 -- scripts/common.sh@336 -- # read -ra ver2 00:07:51.533 11:02:11 -- scripts/common.sh@337 -- # local 'op=<' 00:07:51.533 11:02:11 -- scripts/common.sh@339 -- # ver1_l=2 00:07:51.533 11:02:11 -- scripts/common.sh@340 -- # ver2_l=1 00:07:51.533 11:02:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:51.533 11:02:11 -- scripts/common.sh@343 -- # case "$op" in 00:07:51.533 11:02:11 -- scripts/common.sh@344 -- # : 1 00:07:51.533 11:02:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:51.533 11:02:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:51.533 11:02:11 -- scripts/common.sh@364 -- # decimal 1 00:07:51.533 11:02:11 -- scripts/common.sh@352 -- # local d=1 00:07:51.533 11:02:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.533 11:02:11 -- scripts/common.sh@354 -- # echo 1 00:07:51.533 11:02:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:51.533 11:02:11 -- scripts/common.sh@365 -- # decimal 2 00:07:51.533 11:02:11 -- scripts/common.sh@352 -- # local d=2 00:07:51.533 11:02:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.533 11:02:11 -- scripts/common.sh@354 -- # echo 2 00:07:51.533 11:02:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:51.533 11:02:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:51.533 11:02:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:51.533 11:02:11 -- scripts/common.sh@367 -- # return 0 00:07:51.533 11:02:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.533 11:02:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:51.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.533 --rc genhtml_branch_coverage=1 00:07:51.533 --rc genhtml_function_coverage=1 00:07:51.533 --rc genhtml_legend=1 00:07:51.533 --rc geninfo_all_blocks=1 00:07:51.533 --rc geninfo_unexecuted_blocks=1 00:07:51.533 00:07:51.533 ' 00:07:51.533 11:02:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:51.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.533 --rc genhtml_branch_coverage=1 00:07:51.533 --rc genhtml_function_coverage=1 00:07:51.533 --rc genhtml_legend=1 00:07:51.533 --rc geninfo_all_blocks=1 00:07:51.533 --rc geninfo_unexecuted_blocks=1 00:07:51.533 00:07:51.533 ' 00:07:51.533 11:02:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:51.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.533 --rc genhtml_branch_coverage=1 00:07:51.533 --rc genhtml_function_coverage=1 00:07:51.533 --rc genhtml_legend=1 00:07:51.533 --rc geninfo_all_blocks=1 00:07:51.533 --rc geninfo_unexecuted_blocks=1 00:07:51.533 
00:07:51.533 ' 00:07:51.533 11:02:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:51.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.533 --rc genhtml_branch_coverage=1 00:07:51.533 --rc genhtml_function_coverage=1 00:07:51.533 --rc genhtml_legend=1 00:07:51.533 --rc geninfo_all_blocks=1 00:07:51.533 --rc geninfo_unexecuted_blocks=1 00:07:51.533 00:07:51.533 ' 00:07:51.533 11:02:11 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:51.533 11:02:11 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:51.533 11:02:11 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:51.533 11:02:11 -- nvmf/common.sh@7 -- # uname -s 00:07:51.534 11:02:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.534 11:02:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.534 11:02:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.534 11:02:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.534 11:02:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.534 11:02:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.534 11:02:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.534 11:02:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.534 11:02:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.534 11:02:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.534 11:02:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:07:51.534 11:02:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:07:51.534 11:02:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.534 11:02:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.534 11:02:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:51.534 11:02:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:51.534 11:02:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.534 11:02:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.534 11:02:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.534 11:02:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.534 11:02:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.534 11:02:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.534 11:02:12 -- paths/export.sh@5 -- # export PATH 00:07:51.534 11:02:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.534 11:02:12 -- nvmf/common.sh@46 -- # : 0 00:07:51.534 11:02:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:51.534 11:02:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:51.534 11:02:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:51.534 11:02:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.534 11:02:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.534 11:02:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:51.534 11:02:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:51.534 11:02:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:51.534 11:02:12 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:51.534 11:02:12 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:51.534 11:02:12 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:51.534 11:02:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:51.534 11:02:12 -- common/autotest_common.sh@10 -- # set +x 00:07:51.534 11:02:12 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:51.534 11:02:12 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:51.534 11:02:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:51.534 11:02:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.534 11:02:12 -- common/autotest_common.sh@10 -- # set +x 00:07:51.534 ************************************ 00:07:51.534 START TEST nvmf_example 00:07:51.534 ************************************ 00:07:51.534 11:02:12 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:51.534 * Looking for test storage... 
00:07:51.794 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:51.794 11:02:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:51.794 11:02:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:51.794 11:02:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:51.794 11:02:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:51.794 11:02:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:51.794 11:02:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:51.794 11:02:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:51.794 11:02:12 -- scripts/common.sh@335 -- # IFS=.-: 00:07:51.794 11:02:12 -- scripts/common.sh@335 -- # read -ra ver1 00:07:51.794 11:02:12 -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.794 11:02:12 -- scripts/common.sh@336 -- # read -ra ver2 00:07:51.794 11:02:12 -- scripts/common.sh@337 -- # local 'op=<' 00:07:51.794 11:02:12 -- scripts/common.sh@339 -- # ver1_l=2 00:07:51.794 11:02:12 -- scripts/common.sh@340 -- # ver2_l=1 00:07:51.794 11:02:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:51.794 11:02:12 -- scripts/common.sh@343 -- # case "$op" in 00:07:51.794 11:02:12 -- scripts/common.sh@344 -- # : 1 00:07:51.794 11:02:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:51.794 11:02:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:51.794 11:02:12 -- scripts/common.sh@364 -- # decimal 1 00:07:51.794 11:02:12 -- scripts/common.sh@352 -- # local d=1 00:07:51.794 11:02:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.794 11:02:12 -- scripts/common.sh@354 -- # echo 1 00:07:51.794 11:02:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:51.794 11:02:12 -- scripts/common.sh@365 -- # decimal 2 00:07:51.794 11:02:12 -- scripts/common.sh@352 -- # local d=2 00:07:51.794 11:02:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.794 11:02:12 -- scripts/common.sh@354 -- # echo 2 00:07:51.794 11:02:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:51.794 11:02:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:51.794 11:02:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:51.794 11:02:12 -- scripts/common.sh@367 -- # return 0 00:07:51.794 11:02:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.794 11:02:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:51.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.794 --rc genhtml_branch_coverage=1 00:07:51.794 --rc genhtml_function_coverage=1 00:07:51.794 --rc genhtml_legend=1 00:07:51.794 --rc geninfo_all_blocks=1 00:07:51.794 --rc geninfo_unexecuted_blocks=1 00:07:51.794 00:07:51.794 ' 00:07:51.794 11:02:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:51.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.794 --rc genhtml_branch_coverage=1 00:07:51.794 --rc genhtml_function_coverage=1 00:07:51.794 --rc genhtml_legend=1 00:07:51.794 --rc geninfo_all_blocks=1 00:07:51.794 --rc geninfo_unexecuted_blocks=1 00:07:51.794 00:07:51.794 ' 00:07:51.794 11:02:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:51.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.794 --rc genhtml_branch_coverage=1 00:07:51.794 --rc genhtml_function_coverage=1 00:07:51.794 --rc genhtml_legend=1 00:07:51.794 --rc geninfo_all_blocks=1 00:07:51.794 --rc geninfo_unexecuted_blocks=1 00:07:51.794 00:07:51.794 ' 
00:07:51.794 11:02:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:51.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.794 --rc genhtml_branch_coverage=1 00:07:51.794 --rc genhtml_function_coverage=1 00:07:51.794 --rc genhtml_legend=1 00:07:51.794 --rc geninfo_all_blocks=1 00:07:51.794 --rc geninfo_unexecuted_blocks=1 00:07:51.794 00:07:51.794 ' 00:07:51.794 11:02:12 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:51.794 11:02:12 -- nvmf/common.sh@7 -- # uname -s 00:07:51.794 11:02:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.794 11:02:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.794 11:02:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.794 11:02:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.794 11:02:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.794 11:02:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.794 11:02:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.794 11:02:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.794 11:02:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.794 11:02:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.794 11:02:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:07:51.794 11:02:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:07:51.794 11:02:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.794 11:02:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.794 11:02:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:51.794 11:02:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:51.794 11:02:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.794 11:02:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.794 11:02:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.794 11:02:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.794 11:02:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.794 11:02:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.794 11:02:12 -- paths/export.sh@5 -- # export PATH 00:07:51.794 11:02:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.794 11:02:12 -- nvmf/common.sh@46 -- # : 0 00:07:51.794 11:02:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:51.794 11:02:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:51.794 11:02:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:51.794 11:02:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.794 11:02:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.794 11:02:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:51.794 11:02:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:51.794 11:02:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:51.794 11:02:12 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:51.794 11:02:12 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:51.794 11:02:12 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:51.794 11:02:12 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:51.794 11:02:12 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:51.794 11:02:12 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:51.795 11:02:12 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:51.795 11:02:12 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:51.795 11:02:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:51.795 11:02:12 -- common/autotest_common.sh@10 -- # set +x 00:07:51.795 11:02:12 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:51.795 11:02:12 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:07:51.795 11:02:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.795 11:02:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:51.795 11:02:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:51.795 11:02:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:51.795 11:02:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.795 11:02:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:51.795 11:02:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.795 11:02:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:51.795 11:02:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:51.795 11:02:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:51.795 11:02:12 -- 
common/autotest_common.sh@10 -- # set +x 00:07:58.418 11:02:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:58.418 11:02:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:58.418 11:02:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:58.418 11:02:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:58.418 11:02:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:58.418 11:02:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:58.418 11:02:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:58.418 11:02:17 -- nvmf/common.sh@294 -- # net_devs=() 00:07:58.418 11:02:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:58.418 11:02:17 -- nvmf/common.sh@295 -- # e810=() 00:07:58.418 11:02:17 -- nvmf/common.sh@295 -- # local -ga e810 00:07:58.418 11:02:17 -- nvmf/common.sh@296 -- # x722=() 00:07:58.418 11:02:17 -- nvmf/common.sh@296 -- # local -ga x722 00:07:58.418 11:02:17 -- nvmf/common.sh@297 -- # mlx=() 00:07:58.418 11:02:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:58.418 11:02:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:58.418 11:02:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:58.418 11:02:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:58.418 11:02:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:58.418 11:02:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:58.418 11:02:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:58.418 11:02:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:58.418 11:02:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:58.418 11:02:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:58.418 11:02:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:58.418 11:02:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:58.418 11:02:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:58.418 11:02:17 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:07:58.418 11:02:17 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:07:58.418 11:02:17 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:07:58.418 11:02:17 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:07:58.418 11:02:17 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:07:58.418 11:02:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:58.418 11:02:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:58.418 11:02:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:07:58.418 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:07:58.418 11:02:17 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:07:58.418 11:02:17 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:07:58.418 11:02:17 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:58.418 11:02:17 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:58.418 11:02:17 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:07:58.418 11:02:17 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:07:58.418 11:02:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:58.418 11:02:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:07:58.418 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:07:58.418 11:02:17 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:07:58.418 11:02:17 -- nvmf/common.sh@345 -- # [[ 
mlx5_core == unbound ]] 00:07:58.418 11:02:17 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:58.418 11:02:17 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:58.418 11:02:17 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:07:58.418 11:02:17 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:07:58.418 11:02:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:58.418 11:02:17 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:07:58.418 11:02:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:58.418 11:02:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.418 11:02:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:58.418 11:02:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.418 11:02:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:07:58.418 Found net devices under 0000:18:00.0: mlx_0_0 00:07:58.418 11:02:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.418 11:02:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:58.418 11:02:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.418 11:02:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:58.418 11:02:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.418 11:02:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:07:58.418 Found net devices under 0000:18:00.1: mlx_0_1 00:07:58.418 11:02:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.418 11:02:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:58.418 11:02:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:58.418 11:02:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:58.418 11:02:17 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:07:58.418 11:02:17 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:07:58.418 11:02:17 -- nvmf/common.sh@408 -- # rdma_device_init 00:07:58.418 11:02:17 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:07:58.418 11:02:17 -- nvmf/common.sh@57 -- # uname 00:07:58.418 11:02:17 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:07:58.418 11:02:17 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:07:58.418 11:02:18 -- nvmf/common.sh@62 -- # modprobe ib_core 00:07:58.418 11:02:18 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:07:58.418 11:02:18 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:07:58.418 11:02:18 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:07:58.418 11:02:18 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:07:58.418 11:02:18 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:07:58.418 11:02:18 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:07:58.418 11:02:18 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:58.418 11:02:18 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:07:58.418 11:02:18 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:58.418 11:02:18 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:07:58.418 11:02:18 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:07:58.418 11:02:18 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:58.418 11:02:18 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:07:58.418 11:02:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:58.418 11:02:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:58.418 11:02:18 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:58.418 11:02:18 -- nvmf/common.sh@103 
-- # echo mlx_0_0 00:07:58.418 11:02:18 -- nvmf/common.sh@104 -- # continue 2 00:07:58.418 11:02:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:58.418 11:02:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:58.418 11:02:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:58.418 11:02:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:58.418 11:02:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:58.418 11:02:18 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:07:58.418 11:02:18 -- nvmf/common.sh@104 -- # continue 2 00:07:58.418 11:02:18 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:07:58.418 11:02:18 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:07:58.418 11:02:18 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:07:58.418 11:02:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:07:58.418 11:02:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:58.418 11:02:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:58.418 11:02:18 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:07:58.418 11:02:18 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:07:58.418 11:02:18 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:07:58.418 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:58.418 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:07:58.418 altname enp24s0f0np0 00:07:58.418 altname ens785f0np0 00:07:58.418 inet 192.168.100.8/24 scope global mlx_0_0 00:07:58.418 valid_lft forever preferred_lft forever 00:07:58.418 11:02:18 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:07:58.418 11:02:18 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:07:58.418 11:02:18 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:07:58.418 11:02:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:07:58.418 11:02:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:58.418 11:02:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:58.418 11:02:18 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:07:58.418 11:02:18 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:07:58.418 11:02:18 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:07:58.418 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:58.418 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:07:58.418 altname enp24s0f1np1 00:07:58.418 altname ens785f1np1 00:07:58.418 inet 192.168.100.9/24 scope global mlx_0_1 00:07:58.418 valid_lft forever preferred_lft forever 00:07:58.418 11:02:18 -- nvmf/common.sh@410 -- # return 0 00:07:58.418 11:02:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:58.418 11:02:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:58.418 11:02:18 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:07:58.418 11:02:18 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:07:58.418 11:02:18 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:07:58.418 11:02:18 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:58.418 11:02:18 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:07:58.418 11:02:18 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:07:58.418 11:02:18 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:58.418 11:02:18 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:07:58.418 11:02:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:58.418 11:02:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:58.418 11:02:18 -- nvmf/common.sh@102 
-- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:58.418 11:02:18 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:07:58.419 11:02:18 -- nvmf/common.sh@104 -- # continue 2 00:07:58.419 11:02:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:58.419 11:02:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:58.419 11:02:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:58.419 11:02:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:58.419 11:02:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:58.419 11:02:18 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:07:58.419 11:02:18 -- nvmf/common.sh@104 -- # continue 2 00:07:58.419 11:02:18 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:07:58.419 11:02:18 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:07:58.419 11:02:18 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:07:58.419 11:02:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:07:58.419 11:02:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:58.419 11:02:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:58.419 11:02:18 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:07:58.419 11:02:18 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:07:58.419 11:02:18 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:07:58.419 11:02:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:07:58.419 11:02:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:58.419 11:02:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:58.419 11:02:18 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:07:58.419 192.168.100.9' 00:07:58.419 11:02:18 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:07:58.419 192.168.100.9' 00:07:58.419 11:02:18 -- nvmf/common.sh@445 -- # head -n 1 00:07:58.419 11:02:18 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:58.419 11:02:18 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:07:58.419 192.168.100.9' 00:07:58.419 11:02:18 -- nvmf/common.sh@446 -- # tail -n +2 00:07:58.419 11:02:18 -- nvmf/common.sh@446 -- # head -n 1 00:07:58.419 11:02:18 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:58.419 11:02:18 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:07:58.419 11:02:18 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:58.419 11:02:18 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:07:58.419 11:02:18 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:07:58.419 11:02:18 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:07:58.419 11:02:18 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:58.419 11:02:18 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:58.419 11:02:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:58.419 11:02:18 -- common/autotest_common.sh@10 -- # set +x 00:07:58.419 11:02:18 -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:07:58.419 11:02:18 -- target/nvmf_example.sh@34 -- # nvmfpid=1473544 00:07:58.419 11:02:18 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:58.419 11:02:18 -- target/nvmf_example.sh@36 -- # waitforlisten 1473544 00:07:58.419 11:02:18 -- common/autotest_common.sh@829 -- # '[' -z 1473544 ']' 00:07:58.419 11:02:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.419 11:02:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:58.419 11:02:18 -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.419 11:02:18 -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:58.419 11:02:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:58.419 11:02:18 -- common/autotest_common.sh@10 -- # set +x 00:07:58.419 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.758 11:02:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:58.758 11:02:19 -- common/autotest_common.sh@862 -- # return 0 00:07:58.758 11:02:19 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:58.758 11:02:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:58.759 11:02:19 -- common/autotest_common.sh@10 -- # set +x 00:07:58.759 11:02:19 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:58.759 11:02:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.759 11:02:19 -- common/autotest_common.sh@10 -- # set +x 00:07:58.759 11:02:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.759 11:02:19 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:58.759 11:02:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.759 11:02:19 -- common/autotest_common.sh@10 -- # set +x 00:07:58.759 11:02:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.759 11:02:19 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:58.759 11:02:19 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:58.759 11:02:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.759 11:02:19 -- common/autotest_common.sh@10 -- # set +x 00:07:58.759 11:02:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.759 11:02:19 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:58.759 11:02:19 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:58.759 11:02:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.759 11:02:19 -- common/autotest_common.sh@10 -- # set +x 00:07:58.759 11:02:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.759 11:02:19 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:58.759 11:02:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.759 11:02:19 -- common/autotest_common.sh@10 -- # set +x 00:07:58.759 11:02:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.759 11:02:19 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:58.759 11:02:19 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:59.018 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.229 Initializing NVMe Controllers 00:08:11.229 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:08:11.229 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:11.229 Initialization 
complete. Launching workers. 00:08:11.229 ======================================================== 00:08:11.229 Latency(us) 00:08:11.229 Device Information : IOPS MiB/s Average min max 00:08:11.229 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 28839.90 112.66 2218.89 552.57 13022.80 00:08:11.229 ======================================================== 00:08:11.229 Total : 28839.90 112.66 2218.89 552.57 13022.80 00:08:11.229 00:08:11.229 11:02:30 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:11.229 11:02:30 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:11.229 11:02:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:11.229 11:02:30 -- nvmf/common.sh@116 -- # sync 00:08:11.229 11:02:30 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:08:11.229 11:02:30 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:08:11.229 11:02:30 -- nvmf/common.sh@119 -- # set +e 00:08:11.229 11:02:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:11.229 11:02:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:08:11.229 rmmod nvme_rdma 00:08:11.229 rmmod nvme_fabrics 00:08:11.229 11:02:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:11.229 11:02:30 -- nvmf/common.sh@123 -- # set -e 00:08:11.229 11:02:30 -- nvmf/common.sh@124 -- # return 0 00:08:11.229 11:02:30 -- nvmf/common.sh@477 -- # '[' -n 1473544 ']' 00:08:11.229 11:02:30 -- nvmf/common.sh@478 -- # killprocess 1473544 00:08:11.229 11:02:30 -- common/autotest_common.sh@936 -- # '[' -z 1473544 ']' 00:08:11.229 11:02:30 -- common/autotest_common.sh@940 -- # kill -0 1473544 00:08:11.229 11:02:30 -- common/autotest_common.sh@941 -- # uname 00:08:11.229 11:02:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:11.229 11:02:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1473544 00:08:11.229 11:02:30 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:08:11.229 11:02:30 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:08:11.229 11:02:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1473544' 00:08:11.229 killing process with pid 1473544 00:08:11.229 11:02:30 -- common/autotest_common.sh@955 -- # kill 1473544 00:08:11.229 11:02:30 -- common/autotest_common.sh@960 -- # wait 1473544 00:08:11.229 nvmf threads initialize successfully 00:08:11.229 bdev subsystem init successfully 00:08:11.229 created a nvmf target service 00:08:11.230 create targets's poll groups done 00:08:11.230 all subsystems of target started 00:08:11.230 nvmf target is running 00:08:11.230 all subsystems of target stopped 00:08:11.230 destroy targets's poll groups done 00:08:11.230 destroyed the nvmf target service 00:08:11.230 bdev subsystem finish successfully 00:08:11.230 nvmf threads destroy successfully 00:08:11.230 11:02:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:11.230 11:02:30 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:08:11.230 11:02:30 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:11.230 11:02:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:11.230 11:02:30 -- common/autotest_common.sh@10 -- # set +x 00:08:11.230 00:08:11.230 real 0m18.844s 00:08:11.230 user 0m51.770s 00:08:11.230 sys 0m4.896s 00:08:11.230 11:02:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:11.230 11:02:30 -- common/autotest_common.sh@10 -- # set +x 00:08:11.230 ************************************ 00:08:11.230 END TEST nvmf_example 00:08:11.230 ************************************ 00:08:11.230 
11:02:30 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:08:11.230 11:02:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:11.230 11:02:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:11.230 11:02:30 -- common/autotest_common.sh@10 -- # set +x 00:08:11.230 ************************************ 00:08:11.230 START TEST nvmf_filesystem 00:08:11.230 ************************************ 00:08:11.230 11:02:30 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:08:11.230 * Looking for test storage... 00:08:11.230 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:11.230 11:02:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:11.230 11:02:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:11.230 11:02:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:11.230 11:02:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:11.230 11:02:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:11.230 11:02:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:11.230 11:02:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:11.230 11:02:31 -- scripts/common.sh@335 -- # IFS=.-: 00:08:11.230 11:02:31 -- scripts/common.sh@335 -- # read -ra ver1 00:08:11.230 11:02:31 -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.230 11:02:31 -- scripts/common.sh@336 -- # read -ra ver2 00:08:11.230 11:02:31 -- scripts/common.sh@337 -- # local 'op=<' 00:08:11.230 11:02:31 -- scripts/common.sh@339 -- # ver1_l=2 00:08:11.230 11:02:31 -- scripts/common.sh@340 -- # ver2_l=1 00:08:11.230 11:02:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:11.230 11:02:31 -- scripts/common.sh@343 -- # case "$op" in 00:08:11.230 11:02:31 -- scripts/common.sh@344 -- # : 1 00:08:11.230 11:02:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:11.230 11:02:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:11.230 11:02:31 -- scripts/common.sh@364 -- # decimal 1 00:08:11.230 11:02:31 -- scripts/common.sh@352 -- # local d=1 00:08:11.230 11:02:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.230 11:02:31 -- scripts/common.sh@354 -- # echo 1 00:08:11.230 11:02:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:11.230 11:02:31 -- scripts/common.sh@365 -- # decimal 2 00:08:11.230 11:02:31 -- scripts/common.sh@352 -- # local d=2 00:08:11.230 11:02:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.230 11:02:31 -- scripts/common.sh@354 -- # echo 2 00:08:11.230 11:02:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:11.230 11:02:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:11.230 11:02:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:11.230 11:02:31 -- scripts/common.sh@367 -- # return 0 00:08:11.230 11:02:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.230 11:02:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:11.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.230 --rc genhtml_branch_coverage=1 00:08:11.230 --rc genhtml_function_coverage=1 00:08:11.230 --rc genhtml_legend=1 00:08:11.230 --rc geninfo_all_blocks=1 00:08:11.230 --rc geninfo_unexecuted_blocks=1 00:08:11.230 00:08:11.230 ' 00:08:11.230 11:02:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:11.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.230 --rc genhtml_branch_coverage=1 00:08:11.230 --rc genhtml_function_coverage=1 00:08:11.230 --rc genhtml_legend=1 00:08:11.230 --rc geninfo_all_blocks=1 00:08:11.230 --rc geninfo_unexecuted_blocks=1 00:08:11.230 00:08:11.230 ' 00:08:11.230 11:02:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:11.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.230 --rc genhtml_branch_coverage=1 00:08:11.230 --rc genhtml_function_coverage=1 00:08:11.230 --rc genhtml_legend=1 00:08:11.230 --rc geninfo_all_blocks=1 00:08:11.230 --rc geninfo_unexecuted_blocks=1 00:08:11.230 00:08:11.230 ' 00:08:11.230 11:02:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:11.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.230 --rc genhtml_branch_coverage=1 00:08:11.230 --rc genhtml_function_coverage=1 00:08:11.230 --rc genhtml_legend=1 00:08:11.230 --rc geninfo_all_blocks=1 00:08:11.230 --rc geninfo_unexecuted_blocks=1 00:08:11.230 00:08:11.230 ' 00:08:11.230 11:02:31 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:08:11.230 11:02:31 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:11.230 11:02:31 -- common/autotest_common.sh@34 -- # set -e 00:08:11.230 11:02:31 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:11.230 11:02:31 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:11.230 11:02:31 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:11.230 11:02:31 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:08:11.230 11:02:31 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:11.230 11:02:31 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:11.230 11:02:31 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:11.230 11:02:31 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 
00:08:11.230 11:02:31 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:11.230 11:02:31 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:11.230 11:02:31 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:11.230 11:02:31 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:11.230 11:02:31 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:11.230 11:02:31 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:11.230 11:02:31 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:11.230 11:02:31 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:11.230 11:02:31 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:11.230 11:02:31 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:11.230 11:02:31 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:11.230 11:02:31 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:11.230 11:02:31 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:11.230 11:02:31 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:11.230 11:02:31 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:08:11.230 11:02:31 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:11.230 11:02:31 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:11.230 11:02:31 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:11.230 11:02:31 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:11.230 11:02:31 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:11.230 11:02:31 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:11.230 11:02:31 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:11.230 11:02:31 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:11.230 11:02:31 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:11.230 11:02:31 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:11.230 11:02:31 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:11.230 11:02:31 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:11.230 11:02:31 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:11.230 11:02:31 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:11.230 11:02:31 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:11.230 11:02:31 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:11.230 11:02:31 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:08:11.230 11:02:31 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:11.230 11:02:31 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:11.230 11:02:31 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:11.230 11:02:31 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:11.230 11:02:31 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:11.230 11:02:31 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:11.230 11:02:31 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:11.230 11:02:31 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:11.230 11:02:31 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:11.230 11:02:31 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:11.230 11:02:31 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:11.230 11:02:31 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:11.230 11:02:31 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:11.230 11:02:31 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 
00:08:11.230 11:02:31 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:08:11.230 11:02:31 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:11.230 11:02:31 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:08:11.230 11:02:31 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:11.230 11:02:31 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:11.230 11:02:31 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:11.230 11:02:31 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:11.230 11:02:31 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:08:11.230 11:02:31 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:11.230 11:02:31 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:11.230 11:02:31 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:08:11.230 11:02:31 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:11.230 11:02:31 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:11.231 11:02:31 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:08:11.231 11:02:31 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:11.231 11:02:31 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:11.231 11:02:31 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:11.231 11:02:31 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:08:11.231 11:02:31 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:11.231 11:02:31 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:11.231 11:02:31 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:11.231 11:02:31 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:11.231 11:02:31 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:11.231 11:02:31 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:11.231 11:02:31 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:11.231 11:02:31 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:11.231 11:02:31 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:11.231 11:02:31 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:11.231 11:02:31 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:08:11.231 11:02:31 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:08:11.231 11:02:31 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:08:11.231 11:02:31 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:08:11.231 11:02:31 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:08:11.231 11:02:31 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:08:11.231 11:02:31 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:11.231 11:02:31 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:11.231 11:02:31 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:11.231 11:02:31 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:11.231 11:02:31 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:11.231 11:02:31 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:11.231 11:02:31 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:11.231 
11:02:31 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:11.231 11:02:31 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:11.231 11:02:31 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:08:11.231 11:02:31 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:11.231 #define SPDK_CONFIG_H 00:08:11.231 #define SPDK_CONFIG_APPS 1 00:08:11.231 #define SPDK_CONFIG_ARCH native 00:08:11.231 #undef SPDK_CONFIG_ASAN 00:08:11.231 #undef SPDK_CONFIG_AVAHI 00:08:11.231 #undef SPDK_CONFIG_CET 00:08:11.231 #define SPDK_CONFIG_COVERAGE 1 00:08:11.231 #define SPDK_CONFIG_CROSS_PREFIX 00:08:11.231 #undef SPDK_CONFIG_CRYPTO 00:08:11.231 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:11.231 #undef SPDK_CONFIG_CUSTOMOCF 00:08:11.231 #undef SPDK_CONFIG_DAOS 00:08:11.231 #define SPDK_CONFIG_DAOS_DIR 00:08:11.231 #define SPDK_CONFIG_DEBUG 1 00:08:11.231 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:11.231 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:08:11.231 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:11.231 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:11.231 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:11.231 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:08:11.231 #define SPDK_CONFIG_EXAMPLES 1 00:08:11.231 #undef SPDK_CONFIG_FC 00:08:11.231 #define SPDK_CONFIG_FC_PATH 00:08:11.231 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:11.231 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:11.231 #undef SPDK_CONFIG_FUSE 00:08:11.231 #undef SPDK_CONFIG_FUZZER 00:08:11.231 #define SPDK_CONFIG_FUZZER_LIB 00:08:11.231 #undef SPDK_CONFIG_GOLANG 00:08:11.231 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:11.231 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:11.231 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:11.231 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:11.231 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:11.231 #define SPDK_CONFIG_IDXD 1 00:08:11.231 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:11.231 #undef SPDK_CONFIG_IPSEC_MB 00:08:11.231 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:11.231 #define SPDK_CONFIG_ISAL 1 00:08:11.231 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:11.231 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:11.231 #define SPDK_CONFIG_LIBDIR 00:08:11.231 #undef SPDK_CONFIG_LTO 00:08:11.231 #define SPDK_CONFIG_MAX_LCORES 00:08:11.231 #define SPDK_CONFIG_NVME_CUSE 1 00:08:11.231 #undef SPDK_CONFIG_OCF 00:08:11.231 #define SPDK_CONFIG_OCF_PATH 00:08:11.231 #define SPDK_CONFIG_OPENSSL_PATH 00:08:11.231 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:11.231 #undef SPDK_CONFIG_PGO_USE 00:08:11.231 #define SPDK_CONFIG_PREFIX /usr/local 00:08:11.231 #undef SPDK_CONFIG_RAID5F 00:08:11.231 #undef SPDK_CONFIG_RBD 00:08:11.231 #define SPDK_CONFIG_RDMA 1 00:08:11.231 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:11.231 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:11.231 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:11.231 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:11.231 #define SPDK_CONFIG_SHARED 1 00:08:11.231 #undef SPDK_CONFIG_SMA 00:08:11.231 #define SPDK_CONFIG_TESTS 1 00:08:11.231 #undef SPDK_CONFIG_TSAN 00:08:11.231 #define SPDK_CONFIG_UBLK 1 00:08:11.231 #define SPDK_CONFIG_UBSAN 1 00:08:11.231 #undef SPDK_CONFIG_UNIT_TESTS 00:08:11.231 #undef SPDK_CONFIG_URING 00:08:11.231 #define SPDK_CONFIG_URING_PATH 00:08:11.231 #undef SPDK_CONFIG_URING_ZNS 00:08:11.231 #undef SPDK_CONFIG_USDT 00:08:11.231 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:11.231 #undef 
SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:11.231 #undef SPDK_CONFIG_VFIO_USER 00:08:11.231 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:11.231 #define SPDK_CONFIG_VHOST 1 00:08:11.231 #define SPDK_CONFIG_VIRTIO 1 00:08:11.231 #undef SPDK_CONFIG_VTUNE 00:08:11.231 #define SPDK_CONFIG_VTUNE_DIR 00:08:11.231 #define SPDK_CONFIG_WERROR 1 00:08:11.231 #define SPDK_CONFIG_WPDK_DIR 00:08:11.231 #undef SPDK_CONFIG_XNVME 00:08:11.231 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:11.231 11:02:31 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:11.231 11:02:31 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:11.231 11:02:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.231 11:02:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.231 11:02:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.231 11:02:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.231 11:02:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.231 11:02:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.231 11:02:31 -- paths/export.sh@5 -- # export PATH 00:08:11.231 11:02:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.231 11:02:31 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:08:11.231 11:02:31 -- pm/common@6 -- # dirname 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:08:11.231 11:02:31 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:08:11.231 11:02:31 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:08:11.231 11:02:31 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:11.231 11:02:31 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:08:11.231 11:02:31 -- pm/common@16 -- # TEST_TAG=N/A 00:08:11.231 11:02:31 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:08:11.231 11:02:31 -- common/autotest_common.sh@52 -- # : 1 00:08:11.231 11:02:31 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:08:11.231 11:02:31 -- common/autotest_common.sh@56 -- # : 0 00:08:11.231 11:02:31 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:11.231 11:02:31 -- common/autotest_common.sh@58 -- # : 0 00:08:11.231 11:02:31 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:08:11.231 11:02:31 -- common/autotest_common.sh@60 -- # : 1 00:08:11.231 11:02:31 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:11.231 11:02:31 -- common/autotest_common.sh@62 -- # : 0 00:08:11.231 11:02:31 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:08:11.231 11:02:31 -- common/autotest_common.sh@64 -- # : 00:08:11.231 11:02:31 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:08:11.231 11:02:31 -- common/autotest_common.sh@66 -- # : 0 00:08:11.231 11:02:31 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:08:11.231 11:02:31 -- common/autotest_common.sh@68 -- # : 0 00:08:11.231 11:02:31 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:08:11.231 11:02:31 -- common/autotest_common.sh@70 -- # : 0 00:08:11.231 11:02:31 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:08:11.231 11:02:31 -- common/autotest_common.sh@72 -- # : 0 00:08:11.231 11:02:31 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:11.231 11:02:31 -- common/autotest_common.sh@74 -- # : 0 00:08:11.231 11:02:31 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:08:11.231 11:02:31 -- common/autotest_common.sh@76 -- # : 0 00:08:11.231 11:02:31 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:08:11.232 11:02:31 -- common/autotest_common.sh@78 -- # : 0 00:08:11.232 11:02:31 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:08:11.232 11:02:31 -- common/autotest_common.sh@80 -- # : 1 00:08:11.232 11:02:31 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:08:11.232 11:02:31 -- common/autotest_common.sh@82 -- # : 0 00:08:11.232 11:02:31 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:08:11.232 11:02:31 -- common/autotest_common.sh@84 -- # : 0 00:08:11.232 11:02:31 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:08:11.232 11:02:31 -- common/autotest_common.sh@86 -- # : 1 00:08:11.232 11:02:31 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:08:11.232 11:02:31 -- common/autotest_common.sh@88 -- # : 0 00:08:11.232 11:02:31 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:08:11.232 11:02:31 -- common/autotest_common.sh@90 -- # : 0 00:08:11.232 11:02:31 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:11.232 11:02:31 -- 
common/autotest_common.sh@92 -- # : 0 00:08:11.232 11:02:31 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:08:11.232 11:02:31 -- common/autotest_common.sh@94 -- # : 0 00:08:11.232 11:02:31 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:08:11.232 11:02:31 -- common/autotest_common.sh@96 -- # : rdma 00:08:11.232 11:02:31 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:11.232 11:02:31 -- common/autotest_common.sh@98 -- # : 0 00:08:11.232 11:02:31 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:08:11.232 11:02:31 -- common/autotest_common.sh@100 -- # : 0 00:08:11.232 11:02:31 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:08:11.232 11:02:31 -- common/autotest_common.sh@102 -- # : 0 00:08:11.232 11:02:31 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:08:11.232 11:02:31 -- common/autotest_common.sh@104 -- # : 0 00:08:11.232 11:02:31 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:08:11.232 11:02:31 -- common/autotest_common.sh@106 -- # : 0 00:08:11.232 11:02:31 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:08:11.232 11:02:31 -- common/autotest_common.sh@108 -- # : 0 00:08:11.232 11:02:31 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:08:11.232 11:02:31 -- common/autotest_common.sh@110 -- # : 0 00:08:11.232 11:02:31 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:08:11.232 11:02:31 -- common/autotest_common.sh@112 -- # : 0 00:08:11.232 11:02:31 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:11.232 11:02:31 -- common/autotest_common.sh@114 -- # : 0 00:08:11.232 11:02:31 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:08:11.232 11:02:31 -- common/autotest_common.sh@116 -- # : 1 00:08:11.232 11:02:31 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:08:11.232 11:02:31 -- common/autotest_common.sh@118 -- # : 00:08:11.232 11:02:31 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:11.232 11:02:31 -- common/autotest_common.sh@120 -- # : 0 00:08:11.232 11:02:31 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:08:11.232 11:02:31 -- common/autotest_common.sh@122 -- # : 0 00:08:11.232 11:02:31 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:08:11.232 11:02:31 -- common/autotest_common.sh@124 -- # : 0 00:08:11.232 11:02:31 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:08:11.232 11:02:31 -- common/autotest_common.sh@126 -- # : 0 00:08:11.232 11:02:31 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:08:11.232 11:02:31 -- common/autotest_common.sh@128 -- # : 0 00:08:11.232 11:02:31 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:08:11.232 11:02:31 -- common/autotest_common.sh@130 -- # : 0 00:08:11.232 11:02:31 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:08:11.232 11:02:31 -- common/autotest_common.sh@132 -- # : 00:08:11.232 11:02:31 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:08:11.232 11:02:31 -- common/autotest_common.sh@134 -- # : true 00:08:11.232 11:02:31 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:08:11.232 11:02:31 -- common/autotest_common.sh@136 -- # : 0 00:08:11.232 11:02:31 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:08:11.232 11:02:31 -- common/autotest_common.sh@138 -- # : 0 00:08:11.232 11:02:31 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:08:11.232 
11:02:31 -- common/autotest_common.sh@140 -- # : 0 00:08:11.232 11:02:31 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:08:11.232 11:02:31 -- common/autotest_common.sh@142 -- # : 0 00:08:11.232 11:02:31 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:08:11.232 11:02:31 -- common/autotest_common.sh@144 -- # : 0 00:08:11.232 11:02:31 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:08:11.232 11:02:31 -- common/autotest_common.sh@146 -- # : 0 00:08:11.232 11:02:31 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:08:11.232 11:02:31 -- common/autotest_common.sh@148 -- # : mlx5 00:08:11.232 11:02:31 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:08:11.232 11:02:31 -- common/autotest_common.sh@150 -- # : 0 00:08:11.232 11:02:31 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:08:11.232 11:02:31 -- common/autotest_common.sh@152 -- # : 0 00:08:11.232 11:02:31 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:08:11.232 11:02:31 -- common/autotest_common.sh@154 -- # : 0 00:08:11.232 11:02:31 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:08:11.232 11:02:31 -- common/autotest_common.sh@156 -- # : 0 00:08:11.232 11:02:31 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:08:11.232 11:02:31 -- common/autotest_common.sh@158 -- # : 0 00:08:11.232 11:02:31 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:08:11.232 11:02:31 -- common/autotest_common.sh@160 -- # : 0 00:08:11.232 11:02:31 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:08:11.232 11:02:31 -- common/autotest_common.sh@163 -- # : 00:08:11.232 11:02:31 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:08:11.232 11:02:31 -- common/autotest_common.sh@165 -- # : 0 00:08:11.232 11:02:31 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:08:11.232 11:02:31 -- common/autotest_common.sh@167 -- # : 0 00:08:11.232 11:02:31 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:11.232 11:02:31 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:08:11.232 11:02:31 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:08:11.232 11:02:31 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:08:11.232 11:02:31 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:08:11.232 11:02:31 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:11.232 11:02:31 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:11.232 11:02:31 -- common/autotest_common.sh@174 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:11.232 11:02:31 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:11.232 11:02:31 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:11.232 11:02:31 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:11.232 11:02:31 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:11.232 11:02:31 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:11.232 11:02:31 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:11.232 11:02:31 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:08:11.232 11:02:31 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:11.232 11:02:31 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:11.232 11:02:31 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:11.232 11:02:31 -- common/autotest_common.sh@190 -- # 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:11.232 11:02:31 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:11.232 11:02:31 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:08:11.232 11:02:31 -- common/autotest_common.sh@196 -- # cat 00:08:11.232 11:02:31 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:08:11.232 11:02:31 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:11.232 11:02:31 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:11.232 11:02:31 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:11.232 11:02:31 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:11.232 11:02:31 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:08:11.232 11:02:31 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:08:11.232 11:02:31 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:11.232 11:02:31 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:11.232 11:02:31 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:11.232 11:02:31 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:11.232 11:02:31 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:11.232 11:02:31 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:11.233 11:02:31 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:11.233 11:02:31 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:11.233 11:02:31 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:11.233 11:02:31 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:11.233 11:02:31 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:11.233 11:02:31 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:11.233 11:02:31 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:08:11.233 11:02:31 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:08:11.233 11:02:31 -- common/autotest_common.sh@249 -- # _LCOV= 00:08:11.233 11:02:31 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:08:11.233 11:02:31 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:08:11.233 11:02:31 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:11.233 11:02:31 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:08:11.233 11:02:31 -- common/autotest_common.sh@255 -- # lcov_opt= 00:08:11.233 11:02:31 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:08:11.233 11:02:31 -- common/autotest_common.sh@259 -- # export valgrind= 00:08:11.233 11:02:31 -- common/autotest_common.sh@259 -- # valgrind= 00:08:11.233 11:02:31 -- 
common/autotest_common.sh@265 -- # uname -s 00:08:11.233 11:02:31 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:08:11.233 11:02:31 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:08:11.233 11:02:31 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:08:11.233 11:02:31 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:08:11.233 11:02:31 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:08:11.233 11:02:31 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:08:11.233 11:02:31 -- common/autotest_common.sh@275 -- # MAKE=make 00:08:11.233 11:02:31 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j112 00:08:11.233 11:02:31 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:08:11.233 11:02:31 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:08:11.233 11:02:31 -- common/autotest_common.sh@294 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:08:11.233 11:02:31 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:08:11.233 11:02:31 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:08:11.233 11:02:31 -- common/autotest_common.sh@301 -- # for i in "$@" 00:08:11.233 11:02:31 -- common/autotest_common.sh@302 -- # case "$i" in 00:08:11.233 11:02:31 -- common/autotest_common.sh@307 -- # TEST_TRANSPORT=rdma 00:08:11.233 11:02:31 -- common/autotest_common.sh@319 -- # [[ -z 1476023 ]] 00:08:11.233 11:02:31 -- common/autotest_common.sh@319 -- # kill -0 1476023 00:08:11.233 11:02:31 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:08:11.233 11:02:31 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:08:11.233 11:02:31 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:08:11.233 11:02:31 -- common/autotest_common.sh@332 -- # local mount target_dir 00:08:11.233 11:02:31 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:08:11.233 11:02:31 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:08:11.233 11:02:31 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:08:11.233 11:02:31 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:08:11.233 11:02:31 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.aaimt1 00:08:11.233 11:02:31 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:11.233 11:02:31 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:08:11.233 11:02:31 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:08:11.233 11:02:31 -- common/autotest_common.sh@356 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.aaimt1/tests/target /tmp/spdk.aaimt1 00:08:11.233 11:02:31 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:08:11.233 11:02:31 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:11.233 11:02:31 -- common/autotest_common.sh@328 -- # df -T 00:08:11.233 11:02:31 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:08:11.233 11:02:31 -- common/autotest_common.sh@362 -- # mounts["$mount"]=spdk_devtmpfs 00:08:11.233 11:02:31 -- common/autotest_common.sh@362 -- # fss["$mount"]=devtmpfs 00:08:11.233 11:02:31 -- common/autotest_common.sh@363 -- # avails["$mount"]=67108864 00:08:11.233 11:02:31 -- common/autotest_common.sh@363 -- # sizes["$mount"]=67108864 00:08:11.233 11:02:31 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:08:11.233 11:02:31 -- common/autotest_common.sh@361 
-- # read -r source fs size use avail _ mount 00:08:11.233 11:02:31 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/pmem0 00:08:11.233 11:02:31 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext2 00:08:11.233 11:02:31 -- common/autotest_common.sh@363 -- # avails["$mount"]=4096 00:08:11.233 11:02:31 -- common/autotest_common.sh@363 -- # sizes["$mount"]=5284429824 00:08:11.233 11:02:31 -- common/autotest_common.sh@364 -- # uses["$mount"]=5284425728 00:08:11.233 11:02:31 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:11.233 11:02:31 -- common/autotest_common.sh@362 -- # mounts["$mount"]=spdk_root 00:08:11.233 11:02:31 -- common/autotest_common.sh@362 -- # fss["$mount"]=overlay 00:08:11.233 11:02:31 -- common/autotest_common.sh@363 -- # avails["$mount"]=67800551424 00:08:11.233 11:02:31 -- common/autotest_common.sh@363 -- # sizes["$mount"]=78631596032 00:08:11.233 11:02:31 -- common/autotest_common.sh@364 -- # uses["$mount"]=10831044608 00:08:11.233 11:02:31 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:11.233 11:02:31 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:11.233 11:02:31 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:11.233 11:02:31 -- common/autotest_common.sh@363 -- # avails["$mount"]=39266304000 00:08:11.233 11:02:31 -- common/autotest_common.sh@363 -- # sizes["$mount"]=39315795968 00:08:11.233 11:02:31 -- common/autotest_common.sh@364 -- # uses["$mount"]=49491968 00:08:11.233 11:02:31 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:11.233 11:02:31 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:11.233 11:02:31 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:11.233 11:02:31 -- common/autotest_common.sh@363 -- # avails["$mount"]=15716851712 00:08:11.233 11:02:31 -- common/autotest_common.sh@363 -- # sizes["$mount"]=15726321664 00:08:11.233 11:02:31 -- common/autotest_common.sh@364 -- # uses["$mount"]=9469952 00:08:11.233 11:02:31 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:11.233 11:02:31 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:11.233 11:02:31 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:11.233 11:02:31 -- common/autotest_common.sh@363 -- # avails["$mount"]=39314792448 00:08:11.233 11:02:31 -- common/autotest_common.sh@363 -- # sizes["$mount"]=39315800064 00:08:11.233 11:02:31 -- common/autotest_common.sh@364 -- # uses["$mount"]=1007616 00:08:11.233 11:02:31 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:11.233 11:02:31 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:11.233 11:02:31 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:11.233 11:02:31 -- common/autotest_common.sh@363 -- # avails["$mount"]=7863144448 00:08:11.233 11:02:31 -- common/autotest_common.sh@363 -- # sizes["$mount"]=7863156736 00:08:11.233 11:02:31 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:08:11.233 11:02:31 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:11.233 11:02:31 -- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n' 00:08:11.233 * Looking for test storage... 
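The df -T scan above feeds set_test_storage, which walks the candidate directories and keeps the first one whose backing mount has enough free space, falling back to a scratch directory otherwise. The sketch below is a simplified illustration of that selection under those assumptions, not the real autotest_common.sh implementation; the size constant only roughly mirrors the ~2 GiB requested in this trace.

    # Illustrative sketch: pick a directory whose filesystem has at least the
    # requested free space, mirroring the set_test_storage trace above.
    requested_size=$((2 * 1024 * 1024 * 1024))   # ~2 GiB
    pick_test_storage() {
        local dir avail
        for dir in "$@"; do
            [[ -d $dir ]] || continue
            # df -P prints 1K blocks; column 4 is the available space
            avail=$(( $(df -P "$dir" | awk 'NR==2 {print $4}') * 1024 ))
            if (( avail >= requested_size )); then
                echo "$dir"
                return 0
            fi
        done
        # nothing large enough: fall back to a throwaway directory under /tmp
        mktemp -d -t spdk.XXXXXX
    }

A call such as pick_test_storage "$testdir" /tmp would then print the chosen location, much like the "* Found test storage at ..." message that follows in this log.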
00:08:11.233 11:02:31 -- common/autotest_common.sh@369 -- # local target_space new_size 00:08:11.233 11:02:31 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:08:11.233 11:02:31 -- common/autotest_common.sh@373 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:11.233 11:02:31 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:11.233 11:02:31 -- common/autotest_common.sh@373 -- # mount=/ 00:08:11.233 11:02:31 -- common/autotest_common.sh@375 -- # target_space=67800551424 00:08:11.233 11:02:31 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:08:11.233 11:02:31 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:08:11.233 11:02:31 -- common/autotest_common.sh@381 -- # [[ overlay == tmpfs ]] 00:08:11.233 11:02:31 -- common/autotest_common.sh@381 -- # [[ overlay == ramfs ]] 00:08:11.233 11:02:31 -- common/autotest_common.sh@381 -- # [[ / == / ]] 00:08:11.233 11:02:31 -- common/autotest_common.sh@382 -- # new_size=13045637120 00:08:11.233 11:02:31 -- common/autotest_common.sh@383 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:11.233 11:02:31 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:11.233 11:02:31 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:11.233 11:02:31 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:11.233 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:11.233 11:02:31 -- common/autotest_common.sh@390 -- # return 0 00:08:11.233 11:02:31 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:08:11.233 11:02:31 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:08:11.233 11:02:31 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:11.233 11:02:31 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:11.233 11:02:31 -- common/autotest_common.sh@1682 -- # true 00:08:11.233 11:02:31 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:08:11.233 11:02:31 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:11.233 11:02:31 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:11.233 11:02:31 -- common/autotest_common.sh@27 -- # exec 00:08:11.233 11:02:31 -- common/autotest_common.sh@29 -- # exec 00:08:11.233 11:02:31 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:11.233 11:02:31 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:11.233 11:02:31 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:11.233 11:02:31 -- common/autotest_common.sh@18 -- # set -x 00:08:11.233 11:02:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:11.233 11:02:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:11.233 11:02:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:11.233 11:02:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:11.233 11:02:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:11.233 11:02:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:11.233 11:02:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:11.233 11:02:31 -- scripts/common.sh@335 -- # IFS=.-: 00:08:11.233 11:02:31 -- scripts/common.sh@335 -- # read -ra ver1 00:08:11.233 11:02:31 -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.233 11:02:31 -- scripts/common.sh@336 -- # read -ra ver2 00:08:11.233 11:02:31 -- scripts/common.sh@337 -- # local 'op=<' 00:08:11.233 11:02:31 -- scripts/common.sh@339 -- # ver1_l=2 00:08:11.233 11:02:31 -- scripts/common.sh@340 -- # ver2_l=1 00:08:11.233 11:02:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:11.234 11:02:31 -- scripts/common.sh@343 -- # case "$op" in 00:08:11.234 11:02:31 -- scripts/common.sh@344 -- # : 1 00:08:11.234 11:02:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:11.234 11:02:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:11.234 11:02:31 -- scripts/common.sh@364 -- # decimal 1 00:08:11.234 11:02:31 -- scripts/common.sh@352 -- # local d=1 00:08:11.234 11:02:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.234 11:02:31 -- scripts/common.sh@354 -- # echo 1 00:08:11.234 11:02:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:11.234 11:02:31 -- scripts/common.sh@365 -- # decimal 2 00:08:11.234 11:02:31 -- scripts/common.sh@352 -- # local d=2 00:08:11.234 11:02:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.234 11:02:31 -- scripts/common.sh@354 -- # echo 2 00:08:11.234 11:02:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:11.234 11:02:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:11.234 11:02:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:11.234 11:02:31 -- scripts/common.sh@367 -- # return 0 00:08:11.234 11:02:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.234 11:02:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:11.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.234 --rc genhtml_branch_coverage=1 00:08:11.234 --rc genhtml_function_coverage=1 00:08:11.234 --rc genhtml_legend=1 00:08:11.234 --rc geninfo_all_blocks=1 00:08:11.234 --rc geninfo_unexecuted_blocks=1 00:08:11.234 00:08:11.234 ' 00:08:11.234 11:02:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:11.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.234 --rc genhtml_branch_coverage=1 00:08:11.234 --rc genhtml_function_coverage=1 00:08:11.234 --rc genhtml_legend=1 00:08:11.234 --rc geninfo_all_blocks=1 00:08:11.234 --rc geninfo_unexecuted_blocks=1 00:08:11.234 00:08:11.234 ' 00:08:11.234 11:02:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:11.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.234 --rc genhtml_branch_coverage=1 00:08:11.234 --rc genhtml_function_coverage=1 00:08:11.234 --rc genhtml_legend=1 00:08:11.234 --rc geninfo_all_blocks=1 00:08:11.234 --rc 
geninfo_unexecuted_blocks=1 00:08:11.234 00:08:11.234 ' 00:08:11.234 11:02:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:11.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.234 --rc genhtml_branch_coverage=1 00:08:11.234 --rc genhtml_function_coverage=1 00:08:11.234 --rc genhtml_legend=1 00:08:11.234 --rc geninfo_all_blocks=1 00:08:11.234 --rc geninfo_unexecuted_blocks=1 00:08:11.234 00:08:11.234 ' 00:08:11.234 11:02:31 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:11.234 11:02:31 -- nvmf/common.sh@7 -- # uname -s 00:08:11.234 11:02:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.234 11:02:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.234 11:02:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.234 11:02:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.234 11:02:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.234 11:02:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.234 11:02:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.234 11:02:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.234 11:02:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.234 11:02:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.234 11:02:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:08:11.234 11:02:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:08:11.234 11:02:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.234 11:02:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.234 11:02:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:11.234 11:02:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:11.234 11:02:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.234 11:02:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.234 11:02:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.234 11:02:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.234 11:02:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.234 11:02:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.234 11:02:31 -- paths/export.sh@5 -- # export PATH 00:08:11.234 11:02:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.234 11:02:31 -- nvmf/common.sh@46 -- # : 0 00:08:11.234 11:02:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:11.234 11:02:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:11.234 11:02:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:11.234 11:02:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.234 11:02:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.234 11:02:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:11.234 11:02:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:11.234 11:02:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:11.234 11:02:31 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:11.234 11:02:31 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:11.234 11:02:31 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:11.234 11:02:31 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:11.234 11:02:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:11.234 11:02:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:11.234 11:02:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:11.234 11:02:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:11.234 11:02:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.234 11:02:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:11.234 11:02:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.234 11:02:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:11.234 11:02:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:11.234 11:02:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:11.234 11:02:31 -- common/autotest_common.sh@10 -- # set +x 00:08:16.508 11:02:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:16.508 11:02:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:16.508 11:02:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:16.508 11:02:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:16.508 11:02:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:16.508 11:02:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:16.508 11:02:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:16.508 11:02:36 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:16.508 11:02:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:16.508 11:02:36 -- nvmf/common.sh@295 -- # e810=() 00:08:16.508 11:02:36 -- nvmf/common.sh@295 -- # local -ga e810 00:08:16.508 11:02:36 -- nvmf/common.sh@296 -- # x722=() 00:08:16.508 11:02:36 -- nvmf/common.sh@296 -- # local -ga x722 00:08:16.508 11:02:36 -- nvmf/common.sh@297 -- # mlx=() 00:08:16.508 11:02:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:16.508 11:02:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:16.508 11:02:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:16.508 11:02:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:16.508 11:02:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:16.508 11:02:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:16.508 11:02:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:16.508 11:02:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:16.508 11:02:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:16.508 11:02:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:16.508 11:02:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:16.508 11:02:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:16.508 11:02:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:16.508 11:02:36 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:16.508 11:02:36 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:16.508 11:02:36 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:16.508 11:02:36 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:16.508 11:02:36 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:16.508 11:02:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:16.508 11:02:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:16.508 11:02:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:16.508 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:16.508 11:02:36 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:16.508 11:02:36 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:16.508 11:02:36 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:16.508 11:02:36 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:16.508 11:02:36 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:16.508 11:02:36 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:16.508 11:02:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:16.508 11:02:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:16.508 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:16.508 11:02:36 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:16.508 11:02:36 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:16.508 11:02:36 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:16.508 11:02:36 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:16.508 11:02:36 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:16.508 11:02:36 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:16.508 11:02:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:16.508 11:02:36 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:16.508 11:02:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:16.508 
11:02:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.508 11:02:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:16.508 11:02:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.508 11:02:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:16.508 Found net devices under 0000:18:00.0: mlx_0_0 00:08:16.508 11:02:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.508 11:02:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:16.508 11:02:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.508 11:02:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:16.508 11:02:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.508 11:02:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:16.508 Found net devices under 0000:18:00.1: mlx_0_1 00:08:16.508 11:02:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.508 11:02:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:16.508 11:02:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:16.508 11:02:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:16.508 11:02:36 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:16.508 11:02:36 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:16.508 11:02:36 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:16.508 11:02:36 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:16.508 11:02:36 -- nvmf/common.sh@57 -- # uname 00:08:16.508 11:02:37 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:16.508 11:02:37 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:16.508 11:02:37 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:16.508 11:02:37 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:16.508 11:02:37 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:16.508 11:02:37 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:16.508 11:02:37 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:16.508 11:02:37 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:16.508 11:02:37 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:16.508 11:02:37 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:16.508 11:02:37 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:16.508 11:02:37 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:16.508 11:02:37 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:16.508 11:02:37 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:16.508 11:02:37 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:16.508 11:02:37 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:16.508 11:02:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:16.508 11:02:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.508 11:02:37 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:16.508 11:02:37 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:16.508 11:02:37 -- nvmf/common.sh@104 -- # continue 2 00:08:16.508 11:02:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:16.508 11:02:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.508 11:02:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:16.508 11:02:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.508 11:02:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:16.508 11:02:37 -- nvmf/common.sh@103 -- # 
echo mlx_0_1 00:08:16.508 11:02:37 -- nvmf/common.sh@104 -- # continue 2 00:08:16.508 11:02:37 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:16.768 11:02:37 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:16.768 11:02:37 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:16.768 11:02:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:16.768 11:02:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:16.768 11:02:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:16.768 11:02:37 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:16.768 11:02:37 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:16.768 11:02:37 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:16.768 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:16.768 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:08:16.768 altname enp24s0f0np0 00:08:16.768 altname ens785f0np0 00:08:16.768 inet 192.168.100.8/24 scope global mlx_0_0 00:08:16.768 valid_lft forever preferred_lft forever 00:08:16.768 11:02:37 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:16.768 11:02:37 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:16.768 11:02:37 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:16.768 11:02:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:16.768 11:02:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:16.768 11:02:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:16.768 11:02:37 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:16.768 11:02:37 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:16.768 11:02:37 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:16.768 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:16.768 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:08:16.768 altname enp24s0f1np1 00:08:16.768 altname ens785f1np1 00:08:16.768 inet 192.168.100.9/24 scope global mlx_0_1 00:08:16.768 valid_lft forever preferred_lft forever 00:08:16.768 11:02:37 -- nvmf/common.sh@410 -- # return 0 00:08:16.768 11:02:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:16.768 11:02:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:16.768 11:02:37 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:16.768 11:02:37 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:16.768 11:02:37 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:16.768 11:02:37 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:16.768 11:02:37 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:16.768 11:02:37 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:16.768 11:02:37 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:16.768 11:02:37 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:16.768 11:02:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:16.768 11:02:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.768 11:02:37 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:16.768 11:02:37 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:16.768 11:02:37 -- nvmf/common.sh@104 -- # continue 2 00:08:16.768 11:02:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:16.768 11:02:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.768 11:02:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:16.768 11:02:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.768 11:02:37 -- nvmf/common.sh@102 -- # 
[[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:16.768 11:02:37 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:16.768 11:02:37 -- nvmf/common.sh@104 -- # continue 2 00:08:16.768 11:02:37 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:16.768 11:02:37 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:16.768 11:02:37 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:16.768 11:02:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:16.768 11:02:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:16.768 11:02:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:16.768 11:02:37 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:16.768 11:02:37 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:16.768 11:02:37 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:16.768 11:02:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:16.768 11:02:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:16.768 11:02:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:16.768 11:02:37 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:16.768 192.168.100.9' 00:08:16.768 11:02:37 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:16.768 192.168.100.9' 00:08:16.768 11:02:37 -- nvmf/common.sh@445 -- # head -n 1 00:08:16.768 11:02:37 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:16.768 11:02:37 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:16.768 192.168.100.9' 00:08:16.768 11:02:37 -- nvmf/common.sh@446 -- # tail -n +2 00:08:16.768 11:02:37 -- nvmf/common.sh@446 -- # head -n 1 00:08:16.768 11:02:37 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:16.768 11:02:37 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:16.768 11:02:37 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:16.768 11:02:37 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:16.768 11:02:37 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:16.768 11:02:37 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:16.768 11:02:37 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:16.768 11:02:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:16.768 11:02:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:16.768 11:02:37 -- common/autotest_common.sh@10 -- # set +x 00:08:16.768 ************************************ 00:08:16.768 START TEST nvmf_filesystem_no_in_capsule 00:08:16.768 ************************************ 00:08:16.768 11:02:37 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 0 00:08:16.768 11:02:37 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:16.768 11:02:37 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:16.768 11:02:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:16.768 11:02:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:16.768 11:02:37 -- common/autotest_common.sh@10 -- # set +x 00:08:16.768 11:02:37 -- nvmf/common.sh@469 -- # nvmfpid=1479289 00:08:16.768 11:02:37 -- nvmf/common.sh@470 -- # waitforlisten 1479289 00:08:16.768 11:02:37 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:16.768 11:02:37 -- common/autotest_common.sh@829 -- # '[' -z 1479289 ']' 00:08:16.768 11:02:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.768 11:02:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:16.768 11:02:37 -- common/autotest_common.sh@836 -- 
# echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.768 11:02:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:16.768 11:02:37 -- common/autotest_common.sh@10 -- # set +x 00:08:16.768 [2024-12-13 11:02:37.266696] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:16.768 [2024-12-13 11:02:37.266741] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.768 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.768 [2024-12-13 11:02:37.319805] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:17.027 [2024-12-13 11:02:37.392238] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:17.027 [2024-12-13 11:02:37.392352] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:17.027 [2024-12-13 11:02:37.392360] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:17.027 [2024-12-13 11:02:37.392366] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:17.027 [2024-12-13 11:02:37.392408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.027 [2024-12-13 11:02:37.392506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:17.027 [2024-12-13 11:02:37.392514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:17.027 [2024-12-13 11:02:37.392515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.595 11:02:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:17.595 11:02:38 -- common/autotest_common.sh@862 -- # return 0 00:08:17.595 11:02:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:17.595 11:02:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:17.595 11:02:38 -- common/autotest_common.sh@10 -- # set +x 00:08:17.595 11:02:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:17.595 11:02:38 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:17.596 11:02:38 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:08:17.596 11:02:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.596 11:02:38 -- common/autotest_common.sh@10 -- # set +x 00:08:17.596 [2024-12-13 11:02:38.098488] rdma.c:2780:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:08:17.596 [2024-12-13 11:02:38.116727] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x19fa960/0x19fee50) succeed. 00:08:17.596 [2024-12-13 11:02:38.125029] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x19fbf50/0x1a404f0) succeed. 
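For reference, the target-side bring-up traced above reduces to a few steps. This is a minimal sketch assembled only from the commands visible in this log (rpc_cmd is the harness wrapper around SPDK's JSON-RPC client; backgrounding nvmf_tgt with & stands in for the harness's nvmfappstart/waitforlisten handling), not a replacement for nvmf/common.sh:

  # derive the target address from the first RDMA-capable port (mlx_0_0 -> 192.168.100.8)
  NVMF_FIRST_TARGET_IP=$(ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1)
  modprobe nvme-rdma                                      # host-side NVMe/RDMA initiator driver
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &            # SPDK target: shm id 0, tracepoint mask 0xFFFF, 4 cores
  rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0   # no in-capsule data in this group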
00:08:17.855 11:02:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.855 11:02:38 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:17.855 11:02:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.855 11:02:38 -- common/autotest_common.sh@10 -- # set +x 00:08:17.855 Malloc1 00:08:17.855 11:02:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.855 11:02:38 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:17.855 11:02:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.855 11:02:38 -- common/autotest_common.sh@10 -- # set +x 00:08:17.855 11:02:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.855 11:02:38 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:17.855 11:02:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.855 11:02:38 -- common/autotest_common.sh@10 -- # set +x 00:08:17.855 11:02:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.855 11:02:38 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:17.855 11:02:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.855 11:02:38 -- common/autotest_common.sh@10 -- # set +x 00:08:17.855 [2024-12-13 11:02:38.351974] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:17.855 11:02:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.855 11:02:38 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:17.855 11:02:38 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:08:17.855 11:02:38 -- common/autotest_common.sh@1368 -- # local bdev_info 00:08:17.855 11:02:38 -- common/autotest_common.sh@1369 -- # local bs 00:08:17.855 11:02:38 -- common/autotest_common.sh@1370 -- # local nb 00:08:17.855 11:02:38 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:17.855 11:02:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.855 11:02:38 -- common/autotest_common.sh@10 -- # set +x 00:08:17.855 11:02:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.855 11:02:38 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:08:17.855 { 00:08:17.855 "name": "Malloc1", 00:08:17.855 "aliases": [ 00:08:17.855 "9e1ef02f-baa5-49f7-b0fc-15aa913bd674" 00:08:17.855 ], 00:08:17.855 "product_name": "Malloc disk", 00:08:17.855 "block_size": 512, 00:08:17.855 "num_blocks": 1048576, 00:08:17.855 "uuid": "9e1ef02f-baa5-49f7-b0fc-15aa913bd674", 00:08:17.855 "assigned_rate_limits": { 00:08:17.855 "rw_ios_per_sec": 0, 00:08:17.855 "rw_mbytes_per_sec": 0, 00:08:17.855 "r_mbytes_per_sec": 0, 00:08:17.855 "w_mbytes_per_sec": 0 00:08:17.855 }, 00:08:17.855 "claimed": true, 00:08:17.855 "claim_type": "exclusive_write", 00:08:17.855 "zoned": false, 00:08:17.855 "supported_io_types": { 00:08:17.855 "read": true, 00:08:17.855 "write": true, 00:08:17.855 "unmap": true, 00:08:17.855 "write_zeroes": true, 00:08:17.855 "flush": true, 00:08:17.855 "reset": true, 00:08:17.855 "compare": false, 00:08:17.855 "compare_and_write": false, 00:08:17.855 "abort": true, 00:08:17.855 "nvme_admin": false, 00:08:17.855 "nvme_io": false 00:08:17.855 }, 00:08:17.855 "memory_domains": [ 00:08:17.855 { 00:08:17.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.855 "dma_device_type": 2 00:08:17.855 } 00:08:17.855 ], 00:08:17.855 
"driver_specific": {} 00:08:17.855 } 00:08:17.855 ]' 00:08:17.855 11:02:38 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:08:17.855 11:02:38 -- common/autotest_common.sh@1372 -- # bs=512 00:08:18.114 11:02:38 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:08:18.114 11:02:38 -- common/autotest_common.sh@1373 -- # nb=1048576 00:08:18.114 11:02:38 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:08:18.114 11:02:38 -- common/autotest_common.sh@1377 -- # echo 512 00:08:18.114 11:02:38 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:18.114 11:02:38 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:19.050 11:02:39 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:19.050 11:02:39 -- common/autotest_common.sh@1187 -- # local i=0 00:08:19.050 11:02:39 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:08:19.050 11:02:39 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:08:19.050 11:02:39 -- common/autotest_common.sh@1194 -- # sleep 2 00:08:20.953 11:02:41 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:08:20.953 11:02:41 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:08:20.953 11:02:41 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:08:20.953 11:02:41 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:08:20.953 11:02:41 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:08:20.953 11:02:41 -- common/autotest_common.sh@1197 -- # return 0 00:08:20.953 11:02:41 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:20.953 11:02:41 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:20.953 11:02:41 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:20.953 11:02:41 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:20.953 11:02:41 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:20.953 11:02:41 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:20.953 11:02:41 -- setup/common.sh@80 -- # echo 536870912 00:08:20.953 11:02:41 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:20.953 11:02:41 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:20.953 11:02:41 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:20.953 11:02:41 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:21.211 11:02:41 -- target/filesystem.sh@69 -- # partprobe 00:08:21.211 11:02:41 -- target/filesystem.sh@70 -- # sleep 1 00:08:22.147 11:02:42 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:22.147 11:02:42 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:22.147 11:02:42 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:22.147 11:02:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:22.147 11:02:42 -- common/autotest_common.sh@10 -- # set +x 00:08:22.147 ************************************ 00:08:22.147 START TEST filesystem_ext4 00:08:22.147 ************************************ 00:08:22.147 11:02:42 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:22.147 11:02:42 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:22.147 11:02:42 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:22.147 
11:02:42 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:22.147 11:02:42 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:22.147 11:02:42 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:22.147 11:02:42 -- common/autotest_common.sh@914 -- # local i=0 00:08:22.147 11:02:42 -- common/autotest_common.sh@915 -- # local force 00:08:22.147 11:02:42 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:22.147 11:02:42 -- common/autotest_common.sh@918 -- # force=-F 00:08:22.147 11:02:42 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:22.147 mke2fs 1.47.0 (5-Feb-2023) 00:08:22.147 Discarding device blocks: 0/522240 done 00:08:22.147 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:22.147 Filesystem UUID: c5f1b51a-10a9-413e-8cb4-1ea51fff89ab 00:08:22.147 Superblock backups stored on blocks: 00:08:22.147 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:22.147 00:08:22.147 Allocating group tables: 0/64 done 00:08:22.147 Writing inode tables: 0/64 done 00:08:22.407 Creating journal (8192 blocks): done 00:08:22.407 Writing superblocks and filesystem accounting information: 0/64 done 00:08:22.407 00:08:22.407 11:02:42 -- common/autotest_common.sh@931 -- # return 0 00:08:22.407 11:02:42 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:22.407 11:02:42 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:22.407 11:02:42 -- target/filesystem.sh@25 -- # sync 00:08:22.407 11:02:42 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:22.407 11:02:42 -- target/filesystem.sh@27 -- # sync 00:08:22.407 11:02:42 -- target/filesystem.sh@29 -- # i=0 00:08:22.407 11:02:42 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:22.407 11:02:42 -- target/filesystem.sh@37 -- # kill -0 1479289 00:08:22.407 11:02:42 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:22.407 11:02:42 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:22.407 11:02:42 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:22.407 11:02:42 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:22.407 00:08:22.407 real 0m0.181s 00:08:22.407 user 0m0.027s 00:08:22.407 sys 0m0.059s 00:08:22.407 11:02:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:22.407 11:02:42 -- common/autotest_common.sh@10 -- # set +x 00:08:22.407 ************************************ 00:08:22.407 END TEST filesystem_ext4 00:08:22.407 ************************************ 00:08:22.407 11:02:42 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:22.407 11:02:42 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:22.407 11:02:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:22.407 11:02:42 -- common/autotest_common.sh@10 -- # set +x 00:08:22.407 ************************************ 00:08:22.407 START TEST filesystem_btrfs 00:08:22.407 ************************************ 00:08:22.407 11:02:42 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:22.407 11:02:42 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:22.407 11:02:42 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:22.407 11:02:42 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:22.407 11:02:42 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:22.407 11:02:42 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:22.407 11:02:42 -- common/autotest_common.sh@914 -- # local 
i=0 00:08:22.407 11:02:42 -- common/autotest_common.sh@915 -- # local force 00:08:22.408 11:02:42 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:22.408 11:02:42 -- common/autotest_common.sh@920 -- # force=-f 00:08:22.408 11:02:42 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:22.408 btrfs-progs v6.8.1 00:08:22.408 See https://btrfs.readthedocs.io for more information. 00:08:22.408 00:08:22.408 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:22.408 NOTE: several default settings have changed in version 5.15, please make sure 00:08:22.408 this does not affect your deployments: 00:08:22.408 - DUP for metadata (-m dup) 00:08:22.408 - enabled no-holes (-O no-holes) 00:08:22.408 - enabled free-space-tree (-R free-space-tree) 00:08:22.408 00:08:22.408 Label: (null) 00:08:22.408 UUID: fc4aa4b7-af34-4d44-a702-c061869c1193 00:08:22.408 Node size: 16384 00:08:22.408 Sector size: 4096 (CPU page size: 4096) 00:08:22.408 Filesystem size: 510.00MiB 00:08:22.408 Block group profiles: 00:08:22.408 Data: single 8.00MiB 00:08:22.408 Metadata: DUP 32.00MiB 00:08:22.408 System: DUP 8.00MiB 00:08:22.408 SSD detected: yes 00:08:22.408 Zoned device: no 00:08:22.408 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:22.408 Checksum: crc32c 00:08:22.408 Number of devices: 1 00:08:22.408 Devices: 00:08:22.408 ID SIZE PATH 00:08:22.408 1 510.00MiB /dev/nvme0n1p1 00:08:22.408 00:08:22.408 11:02:42 -- common/autotest_common.sh@931 -- # return 0 00:08:22.408 11:02:42 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:22.667 11:02:43 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:22.667 11:02:43 -- target/filesystem.sh@25 -- # sync 00:08:22.667 11:02:43 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:22.667 11:02:43 -- target/filesystem.sh@27 -- # sync 00:08:22.667 11:02:43 -- target/filesystem.sh@29 -- # i=0 00:08:22.667 11:02:43 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:22.667 11:02:43 -- target/filesystem.sh@37 -- # kill -0 1479289 00:08:22.667 11:02:43 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:22.667 11:02:43 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:22.667 11:02:43 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:22.667 11:02:43 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:22.667 00:08:22.667 real 0m0.224s 00:08:22.667 user 0m0.026s 00:08:22.667 sys 0m0.108s 00:08:22.667 11:02:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:22.667 11:02:43 -- common/autotest_common.sh@10 -- # set +x 00:08:22.667 ************************************ 00:08:22.667 END TEST filesystem_btrfs 00:08:22.667 ************************************ 00:08:22.667 11:02:43 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:22.667 11:02:43 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:22.667 11:02:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:22.667 11:02:43 -- common/autotest_common.sh@10 -- # set +x 00:08:22.667 ************************************ 00:08:22.667 START TEST filesystem_xfs 00:08:22.667 ************************************ 00:08:22.667 11:02:43 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:08:22.667 11:02:43 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:22.667 11:02:43 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:22.667 11:02:43 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:22.667 11:02:43 -- 
common/autotest_common.sh@912 -- # local fstype=xfs 00:08:22.667 11:02:43 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:22.667 11:02:43 -- common/autotest_common.sh@914 -- # local i=0 00:08:22.667 11:02:43 -- common/autotest_common.sh@915 -- # local force 00:08:22.667 11:02:43 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:22.667 11:02:43 -- common/autotest_common.sh@920 -- # force=-f 00:08:22.667 11:02:43 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:22.667 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:22.667 = sectsz=512 attr=2, projid32bit=1 00:08:22.667 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:22.667 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:22.667 data = bsize=4096 blocks=130560, imaxpct=25 00:08:22.667 = sunit=0 swidth=0 blks 00:08:22.667 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:22.667 log =internal log bsize=4096 blocks=16384, version=2 00:08:22.667 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:22.667 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:22.926 Discarding blocks...Done. 00:08:22.926 11:02:43 -- common/autotest_common.sh@931 -- # return 0 00:08:22.926 11:02:43 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:22.926 11:02:43 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:22.926 11:02:43 -- target/filesystem.sh@25 -- # sync 00:08:22.926 11:02:43 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:22.926 11:02:43 -- target/filesystem.sh@27 -- # sync 00:08:22.926 11:02:43 -- target/filesystem.sh@29 -- # i=0 00:08:22.926 11:02:43 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:22.926 11:02:43 -- target/filesystem.sh@37 -- # kill -0 1479289 00:08:22.926 11:02:43 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:22.926 11:02:43 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:22.926 11:02:43 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:22.926 11:02:43 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:22.926 00:08:22.926 real 0m0.204s 00:08:22.926 user 0m0.020s 00:08:22.926 sys 0m0.066s 00:08:22.926 11:02:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:22.926 11:02:43 -- common/autotest_common.sh@10 -- # set +x 00:08:22.926 ************************************ 00:08:22.926 END TEST filesystem_xfs 00:08:22.926 ************************************ 00:08:22.927 11:02:43 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:22.927 11:02:43 -- target/filesystem.sh@93 -- # sync 00:08:22.927 11:02:43 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:23.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:23.863 11:02:44 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:23.863 11:02:44 -- common/autotest_common.sh@1208 -- # local i=0 00:08:23.863 11:02:44 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:23.863 11:02:44 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:23.863 11:02:44 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:23.863 11:02:44 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:23.863 11:02:44 -- common/autotest_common.sh@1220 -- # return 0 00:08:23.863 11:02:44 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:23.863 11:02:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.863 11:02:44 -- 
common/autotest_common.sh@10 -- # set +x 00:08:23.863 11:02:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.863 11:02:44 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:23.863 11:02:44 -- target/filesystem.sh@101 -- # killprocess 1479289 00:08:23.863 11:02:44 -- common/autotest_common.sh@936 -- # '[' -z 1479289 ']' 00:08:23.863 11:02:44 -- common/autotest_common.sh@940 -- # kill -0 1479289 00:08:23.863 11:02:44 -- common/autotest_common.sh@941 -- # uname 00:08:23.863 11:02:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:23.863 11:02:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1479289 00:08:23.863 11:02:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:23.863 11:02:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:23.863 11:02:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1479289' 00:08:23.863 killing process with pid 1479289 00:08:23.863 11:02:44 -- common/autotest_common.sh@955 -- # kill 1479289 00:08:23.863 11:02:44 -- common/autotest_common.sh@960 -- # wait 1479289 00:08:24.432 11:02:44 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:24.432 00:08:24.432 real 0m7.602s 00:08:24.432 user 0m29.585s 00:08:24.432 sys 0m0.977s 00:08:24.432 11:02:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:24.432 11:02:44 -- common/autotest_common.sh@10 -- # set +x 00:08:24.432 ************************************ 00:08:24.432 END TEST nvmf_filesystem_no_in_capsule 00:08:24.432 ************************************ 00:08:24.432 11:02:44 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:24.432 11:02:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:24.432 11:02:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:24.432 11:02:44 -- common/autotest_common.sh@10 -- # set +x 00:08:24.432 ************************************ 00:08:24.432 START TEST nvmf_filesystem_in_capsule 00:08:24.432 ************************************ 00:08:24.432 11:02:44 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 4096 00:08:24.432 11:02:44 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:24.432 11:02:44 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:24.432 11:02:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:24.432 11:02:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:24.432 11:02:44 -- common/autotest_common.sh@10 -- # set +x 00:08:24.432 11:02:44 -- nvmf/common.sh@469 -- # nvmfpid=1480878 00:08:24.432 11:02:44 -- nvmf/common.sh@470 -- # waitforlisten 1480878 00:08:24.432 11:02:44 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:24.432 11:02:44 -- common/autotest_common.sh@829 -- # '[' -z 1480878 ']' 00:08:24.432 11:02:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.432 11:02:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:24.432 11:02:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
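That completes the no-in-capsule group. Once the three filesystem subtests pass, each group tears down the same way: drop the scratch partition, disconnect the host, delete the subsystem, and stop the target. Condensed from the trace above, with the pid and NQN exactly as logged:

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1          # remove the test partition under a lock on the device
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1            # detach the initiator
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill 1479289 && wait 1479289                             # killprocess: stop nvmf_tgt

The in-capsule variant that starts here repeats the whole flow against a fresh target process (pid 1480878).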
00:08:24.432 11:02:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:24.432 11:02:44 -- common/autotest_common.sh@10 -- # set +x 00:08:24.432 [2024-12-13 11:02:44.912464] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:24.432 [2024-12-13 11:02:44.912529] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:24.432 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.432 [2024-12-13 11:02:44.963353] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:24.694 [2024-12-13 11:02:45.035693] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:24.694 [2024-12-13 11:02:45.035793] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:24.694 [2024-12-13 11:02:45.035801] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:24.694 [2024-12-13 11:02:45.035807] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:24.694 [2024-12-13 11:02:45.035848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.694 [2024-12-13 11:02:45.035931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:24.694 [2024-12-13 11:02:45.036015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:24.694 [2024-12-13 11:02:45.036017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.262 11:02:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:25.262 11:02:45 -- common/autotest_common.sh@862 -- # return 0 00:08:25.262 11:02:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:25.262 11:02:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:25.262 11:02:45 -- common/autotest_common.sh@10 -- # set +x 00:08:25.262 11:02:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:25.262 11:02:45 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:25.262 11:02:45 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:08:25.262 11:02:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.262 11:02:45 -- common/autotest_common.sh@10 -- # set +x 00:08:25.262 [2024-12-13 11:02:45.772382] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x9ed960/0x9f1e50) succeed. 00:08:25.262 [2024-12-13 11:02:45.780580] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x9eef50/0xa334f0) succeed. 
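The setting that distinguishes this second group is the transport's in-capsule data size: -c 4096 instead of -c 0, which lets small host writes travel inside the fabrics command capsule rather than being fetched by the target with a separate RDMA read. The export and host-connect sequence that follows is otherwise the same as in the first group; condensed from the trace (rpc_cmd as before; $NVME_HOSTNQN and $NVME_HOSTID are the host identifiers set in nvmf/common.sh):

  rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096
  rpc_cmd bdev_malloc_create 512 512 -b Malloc1            # 512 MiB ram-backed bdev, 512 B blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  nvme connect -i 15 --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t rdma \
      -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%   # one partition for the filesystem subtests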
00:08:25.521 11:02:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.521 11:02:45 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:25.521 11:02:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.521 11:02:45 -- common/autotest_common.sh@10 -- # set +x 00:08:25.521 Malloc1 00:08:25.521 11:02:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.521 11:02:46 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:25.521 11:02:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.521 11:02:46 -- common/autotest_common.sh@10 -- # set +x 00:08:25.521 11:02:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.521 11:02:46 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:25.521 11:02:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.521 11:02:46 -- common/autotest_common.sh@10 -- # set +x 00:08:25.521 11:02:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.521 11:02:46 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:25.521 11:02:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.521 11:02:46 -- common/autotest_common.sh@10 -- # set +x 00:08:25.521 [2024-12-13 11:02:46.027093] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:25.521 11:02:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.521 11:02:46 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:25.521 11:02:46 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:08:25.521 11:02:46 -- common/autotest_common.sh@1368 -- # local bdev_info 00:08:25.521 11:02:46 -- common/autotest_common.sh@1369 -- # local bs 00:08:25.521 11:02:46 -- common/autotest_common.sh@1370 -- # local nb 00:08:25.521 11:02:46 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:25.521 11:02:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.521 11:02:46 -- common/autotest_common.sh@10 -- # set +x 00:08:25.521 11:02:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.521 11:02:46 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:08:25.521 { 00:08:25.521 "name": "Malloc1", 00:08:25.521 "aliases": [ 00:08:25.521 "ab77719b-438c-449a-867f-e2e9911ca248" 00:08:25.521 ], 00:08:25.521 "product_name": "Malloc disk", 00:08:25.521 "block_size": 512, 00:08:25.521 "num_blocks": 1048576, 00:08:25.521 "uuid": "ab77719b-438c-449a-867f-e2e9911ca248", 00:08:25.521 "assigned_rate_limits": { 00:08:25.521 "rw_ios_per_sec": 0, 00:08:25.521 "rw_mbytes_per_sec": 0, 00:08:25.521 "r_mbytes_per_sec": 0, 00:08:25.521 "w_mbytes_per_sec": 0 00:08:25.521 }, 00:08:25.521 "claimed": true, 00:08:25.521 "claim_type": "exclusive_write", 00:08:25.521 "zoned": false, 00:08:25.521 "supported_io_types": { 00:08:25.521 "read": true, 00:08:25.521 "write": true, 00:08:25.521 "unmap": true, 00:08:25.521 "write_zeroes": true, 00:08:25.521 "flush": true, 00:08:25.521 "reset": true, 00:08:25.521 "compare": false, 00:08:25.521 "compare_and_write": false, 00:08:25.521 "abort": true, 00:08:25.521 "nvme_admin": false, 00:08:25.521 "nvme_io": false 00:08:25.521 }, 00:08:25.521 "memory_domains": [ 00:08:25.521 { 00:08:25.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.521 "dma_device_type": 2 00:08:25.521 } 00:08:25.521 ], 00:08:25.521 
"driver_specific": {} 00:08:25.521 } 00:08:25.521 ]' 00:08:25.521 11:02:46 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:08:25.780 11:02:46 -- common/autotest_common.sh@1372 -- # bs=512 00:08:25.780 11:02:46 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:08:25.780 11:02:46 -- common/autotest_common.sh@1373 -- # nb=1048576 00:08:25.780 11:02:46 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:08:25.780 11:02:46 -- common/autotest_common.sh@1377 -- # echo 512 00:08:25.780 11:02:46 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:25.780 11:02:46 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:26.716 11:02:47 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:26.716 11:02:47 -- common/autotest_common.sh@1187 -- # local i=0 00:08:26.716 11:02:47 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:08:26.716 11:02:47 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:08:26.716 11:02:47 -- common/autotest_common.sh@1194 -- # sleep 2 00:08:28.619 11:02:49 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:08:28.619 11:02:49 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:08:28.619 11:02:49 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:08:28.619 11:02:49 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:08:28.619 11:02:49 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:08:28.619 11:02:49 -- common/autotest_common.sh@1197 -- # return 0 00:08:28.619 11:02:49 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:28.619 11:02:49 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:28.619 11:02:49 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:28.619 11:02:49 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:28.619 11:02:49 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:28.619 11:02:49 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:28.619 11:02:49 -- setup/common.sh@80 -- # echo 536870912 00:08:28.619 11:02:49 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:28.619 11:02:49 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:28.619 11:02:49 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:28.619 11:02:49 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:28.619 11:02:49 -- target/filesystem.sh@69 -- # partprobe 00:08:28.878 11:02:49 -- target/filesystem.sh@70 -- # sleep 1 00:08:29.814 11:02:50 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:29.814 11:02:50 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:29.814 11:02:50 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:29.814 11:02:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:29.814 11:02:50 -- common/autotest_common.sh@10 -- # set +x 00:08:29.814 ************************************ 00:08:29.814 START TEST filesystem_in_capsule_ext4 00:08:29.814 ************************************ 00:08:29.814 11:02:50 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:29.814 11:02:50 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:29.814 11:02:50 -- target/filesystem.sh@19 -- # 
nvme_name=nvme0n1 00:08:29.814 11:02:50 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:29.814 11:02:50 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:29.814 11:02:50 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:29.814 11:02:50 -- common/autotest_common.sh@914 -- # local i=0 00:08:29.814 11:02:50 -- common/autotest_common.sh@915 -- # local force 00:08:29.814 11:02:50 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:29.814 11:02:50 -- common/autotest_common.sh@918 -- # force=-F 00:08:29.815 11:02:50 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:29.815 mke2fs 1.47.0 (5-Feb-2023) 00:08:30.074 Discarding device blocks: 0/522240 done 00:08:30.074 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:30.074 Filesystem UUID: 5935b4bd-394d-43f0-b23d-a140c138054a 00:08:30.074 Superblock backups stored on blocks: 00:08:30.074 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:30.074 00:08:30.074 Allocating group tables: 0/64 done 00:08:30.074 Writing inode tables: 0/64 done 00:08:30.074 Creating journal (8192 blocks): done 00:08:30.074 Writing superblocks and filesystem accounting information: 0/64 done 00:08:30.074 00:08:30.074 11:02:50 -- common/autotest_common.sh@931 -- # return 0 00:08:30.074 11:02:50 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:30.074 11:02:50 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:30.074 11:02:50 -- target/filesystem.sh@25 -- # sync 00:08:30.074 11:02:50 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:30.074 11:02:50 -- target/filesystem.sh@27 -- # sync 00:08:30.074 11:02:50 -- target/filesystem.sh@29 -- # i=0 00:08:30.074 11:02:50 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:30.074 11:02:50 -- target/filesystem.sh@37 -- # kill -0 1480878 00:08:30.074 11:02:50 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:30.074 11:02:50 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:30.074 11:02:50 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:30.074 11:02:50 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:30.074 00:08:30.074 real 0m0.177s 00:08:30.074 user 0m0.025s 00:08:30.074 sys 0m0.058s 00:08:30.074 11:02:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:30.074 11:02:50 -- common/autotest_common.sh@10 -- # set +x 00:08:30.074 ************************************ 00:08:30.074 END TEST filesystem_in_capsule_ext4 00:08:30.074 ************************************ 00:08:30.074 11:02:50 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:30.074 11:02:50 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:30.074 11:02:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:30.074 11:02:50 -- common/autotest_common.sh@10 -- # set +x 00:08:30.074 ************************************ 00:08:30.074 START TEST filesystem_in_capsule_btrfs 00:08:30.074 ************************************ 00:08:30.074 11:02:50 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:30.074 11:02:50 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:30.074 11:02:50 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:30.074 11:02:50 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:30.074 11:02:50 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:30.074 11:02:50 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 
00:08:30.074 11:02:50 -- common/autotest_common.sh@914 -- # local i=0 00:08:30.074 11:02:50 -- common/autotest_common.sh@915 -- # local force 00:08:30.074 11:02:50 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:30.074 11:02:50 -- common/autotest_common.sh@920 -- # force=-f 00:08:30.074 11:02:50 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:30.074 btrfs-progs v6.8.1 00:08:30.074 See https://btrfs.readthedocs.io for more information. 00:08:30.074 00:08:30.074 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:30.074 NOTE: several default settings have changed in version 5.15, please make sure 00:08:30.074 this does not affect your deployments: 00:08:30.074 - DUP for metadata (-m dup) 00:08:30.074 - enabled no-holes (-O no-holes) 00:08:30.074 - enabled free-space-tree (-R free-space-tree) 00:08:30.074 00:08:30.074 Label: (null) 00:08:30.074 UUID: 53710b1c-0d94-49a6-a4f4-e2a8bf14f71a 00:08:30.074 Node size: 16384 00:08:30.074 Sector size: 4096 (CPU page size: 4096) 00:08:30.074 Filesystem size: 510.00MiB 00:08:30.074 Block group profiles: 00:08:30.074 Data: single 8.00MiB 00:08:30.074 Metadata: DUP 32.00MiB 00:08:30.074 System: DUP 8.00MiB 00:08:30.074 SSD detected: yes 00:08:30.074 Zoned device: no 00:08:30.074 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:30.074 Checksum: crc32c 00:08:30.074 Number of devices: 1 00:08:30.074 Devices: 00:08:30.074 ID SIZE PATH 00:08:30.074 1 510.00MiB /dev/nvme0n1p1 00:08:30.074 00:08:30.074 11:02:50 -- common/autotest_common.sh@931 -- # return 0 00:08:30.074 11:02:50 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:30.334 11:02:50 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:30.334 11:02:50 -- target/filesystem.sh@25 -- # sync 00:08:30.334 11:02:50 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:30.334 11:02:50 -- target/filesystem.sh@27 -- # sync 00:08:30.334 11:02:50 -- target/filesystem.sh@29 -- # i=0 00:08:30.334 11:02:50 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:30.334 11:02:50 -- target/filesystem.sh@37 -- # kill -0 1480878 00:08:30.334 11:02:50 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:30.334 11:02:50 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:30.334 11:02:50 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:30.334 11:02:50 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:30.334 00:08:30.334 real 0m0.220s 00:08:30.334 user 0m0.014s 00:08:30.334 sys 0m0.115s 00:08:30.334 11:02:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:30.334 11:02:50 -- common/autotest_common.sh@10 -- # set +x 00:08:30.334 ************************************ 00:08:30.334 END TEST filesystem_in_capsule_btrfs 00:08:30.334 ************************************ 00:08:30.334 11:02:50 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:30.334 11:02:50 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:30.334 11:02:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:30.334 11:02:50 -- common/autotest_common.sh@10 -- # set +x 00:08:30.334 ************************************ 00:08:30.334 START TEST filesystem_in_capsule_xfs 00:08:30.334 ************************************ 00:08:30.334 11:02:50 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:08:30.334 11:02:50 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:30.334 11:02:50 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:30.334 
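Each filesystem_* subtest, in both groups, then exercises the exported namespace the same way. A simplified, happy-path sketch of the loop traced around it (make_filesystem picks -F for ext4 and -f otherwise; its other bookkeeping is omitted here):

  force=-f; [ "$fstype" = ext4 ] && force=-F
  mkfs.$fstype $force /dev/nvme0n1p1
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 "$nvmfpid"                                       # the target must still be running
  lsblk -l -o NAME | grep -q -w nvme0n1                    # namespace still visible on the host
  lsblk -l -o NAME | grep -q -w nvme0n1p1                  # partition still visible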
11:02:50 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:30.334 11:02:50 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:30.334 11:02:50 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:30.334 11:02:50 -- common/autotest_common.sh@914 -- # local i=0 00:08:30.334 11:02:50 -- common/autotest_common.sh@915 -- # local force 00:08:30.334 11:02:50 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:30.334 11:02:50 -- common/autotest_common.sh@920 -- # force=-f 00:08:30.334 11:02:50 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:30.334 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:30.334 = sectsz=512 attr=2, projid32bit=1 00:08:30.334 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:30.334 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:30.334 data = bsize=4096 blocks=130560, imaxpct=25 00:08:30.334 = sunit=0 swidth=0 blks 00:08:30.334 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:30.334 log =internal log bsize=4096 blocks=16384, version=2 00:08:30.334 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:30.334 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:30.334 Discarding blocks...Done. 00:08:30.334 11:02:50 -- common/autotest_common.sh@931 -- # return 0 00:08:30.334 11:02:50 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:30.593 11:02:50 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:30.593 11:02:50 -- target/filesystem.sh@25 -- # sync 00:08:30.593 11:02:50 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:30.593 11:02:50 -- target/filesystem.sh@27 -- # sync 00:08:30.593 11:02:50 -- target/filesystem.sh@29 -- # i=0 00:08:30.593 11:02:50 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:30.593 11:02:50 -- target/filesystem.sh@37 -- # kill -0 1480878 00:08:30.593 11:02:50 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:30.593 11:02:50 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:30.593 11:02:50 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:30.593 11:02:50 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:30.593 00:08:30.593 real 0m0.173s 00:08:30.593 user 0m0.026s 00:08:30.593 sys 0m0.057s 00:08:30.593 11:02:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:30.593 11:02:50 -- common/autotest_common.sh@10 -- # set +x 00:08:30.593 ************************************ 00:08:30.593 END TEST filesystem_in_capsule_xfs 00:08:30.593 ************************************ 00:08:30.593 11:02:50 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:30.593 11:02:51 -- target/filesystem.sh@93 -- # sync 00:08:30.593 11:02:51 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:31.531 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.531 11:02:51 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:31.531 11:02:51 -- common/autotest_common.sh@1208 -- # local i=0 00:08:31.531 11:02:51 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:31.531 11:02:51 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:31.531 11:02:51 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:31.531 11:02:51 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:31.531 11:02:51 -- common/autotest_common.sh@1220 -- # return 0 00:08:31.531 11:02:51 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:08:31.531 11:02:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.531 11:02:51 -- common/autotest_common.sh@10 -- # set +x 00:08:31.531 11:02:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.531 11:02:51 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:31.531 11:02:51 -- target/filesystem.sh@101 -- # killprocess 1480878 00:08:31.531 11:02:51 -- common/autotest_common.sh@936 -- # '[' -z 1480878 ']' 00:08:31.531 11:02:51 -- common/autotest_common.sh@940 -- # kill -0 1480878 00:08:31.531 11:02:51 -- common/autotest_common.sh@941 -- # uname 00:08:31.531 11:02:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:31.531 11:02:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1480878 00:08:31.531 11:02:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:31.531 11:02:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:31.531 11:02:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1480878' 00:08:31.531 killing process with pid 1480878 00:08:31.531 11:02:52 -- common/autotest_common.sh@955 -- # kill 1480878 00:08:31.531 11:02:52 -- common/autotest_common.sh@960 -- # wait 1480878 00:08:32.099 11:02:52 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:32.099 00:08:32.099 real 0m7.603s 00:08:32.099 user 0m29.522s 00:08:32.099 sys 0m1.022s 00:08:32.099 11:02:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:32.099 11:02:52 -- common/autotest_common.sh@10 -- # set +x 00:08:32.099 ************************************ 00:08:32.099 END TEST nvmf_filesystem_in_capsule 00:08:32.099 ************************************ 00:08:32.099 11:02:52 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:32.099 11:02:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:32.099 11:02:52 -- nvmf/common.sh@116 -- # sync 00:08:32.099 11:02:52 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:08:32.099 11:02:52 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:08:32.099 11:02:52 -- nvmf/common.sh@119 -- # set +e 00:08:32.099 11:02:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:32.099 11:02:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:08:32.099 rmmod nvme_rdma 00:08:32.099 rmmod nvme_fabrics 00:08:32.099 11:02:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:32.099 11:02:52 -- nvmf/common.sh@123 -- # set -e 00:08:32.099 11:02:52 -- nvmf/common.sh@124 -- # return 0 00:08:32.099 11:02:52 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:32.099 11:02:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:32.099 11:02:52 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:08:32.099 00:08:32.099 real 0m21.625s 00:08:32.099 user 1m1.066s 00:08:32.099 sys 0m6.640s 00:08:32.099 11:02:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:32.099 11:02:52 -- common/autotest_common.sh@10 -- # set +x 00:08:32.099 ************************************ 00:08:32.099 END TEST nvmf_filesystem 00:08:32.100 ************************************ 00:08:32.100 11:02:52 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:32.100 11:02:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:32.100 11:02:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:32.100 11:02:52 -- common/autotest_common.sh@10 -- # set +x 00:08:32.100 ************************************ 00:08:32.100 START TEST nvmf_discovery 00:08:32.100 
************************************ 00:08:32.100 11:02:52 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:32.100 * Looking for test storage... 00:08:32.100 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:32.100 11:02:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:32.100 11:02:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:32.100 11:02:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:32.359 11:02:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:32.359 11:02:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:32.359 11:02:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:32.359 11:02:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:32.359 11:02:52 -- scripts/common.sh@335 -- # IFS=.-: 00:08:32.359 11:02:52 -- scripts/common.sh@335 -- # read -ra ver1 00:08:32.359 11:02:52 -- scripts/common.sh@336 -- # IFS=.-: 00:08:32.359 11:02:52 -- scripts/common.sh@336 -- # read -ra ver2 00:08:32.359 11:02:52 -- scripts/common.sh@337 -- # local 'op=<' 00:08:32.359 11:02:52 -- scripts/common.sh@339 -- # ver1_l=2 00:08:32.359 11:02:52 -- scripts/common.sh@340 -- # ver2_l=1 00:08:32.359 11:02:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:32.359 11:02:52 -- scripts/common.sh@343 -- # case "$op" in 00:08:32.359 11:02:52 -- scripts/common.sh@344 -- # : 1 00:08:32.359 11:02:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:32.359 11:02:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:32.359 11:02:52 -- scripts/common.sh@364 -- # decimal 1 00:08:32.359 11:02:52 -- scripts/common.sh@352 -- # local d=1 00:08:32.359 11:02:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:32.359 11:02:52 -- scripts/common.sh@354 -- # echo 1 00:08:32.359 11:02:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:32.359 11:02:52 -- scripts/common.sh@365 -- # decimal 2 00:08:32.359 11:02:52 -- scripts/common.sh@352 -- # local d=2 00:08:32.359 11:02:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:32.359 11:02:52 -- scripts/common.sh@354 -- # echo 2 00:08:32.359 11:02:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:32.359 11:02:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:32.359 11:02:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:32.359 11:02:52 -- scripts/common.sh@367 -- # return 0 00:08:32.359 11:02:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:32.359 11:02:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:32.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.359 --rc genhtml_branch_coverage=1 00:08:32.359 --rc genhtml_function_coverage=1 00:08:32.359 --rc genhtml_legend=1 00:08:32.359 --rc geninfo_all_blocks=1 00:08:32.359 --rc geninfo_unexecuted_blocks=1 00:08:32.359 00:08:32.359 ' 00:08:32.359 11:02:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:32.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.359 --rc genhtml_branch_coverage=1 00:08:32.359 --rc genhtml_function_coverage=1 00:08:32.359 --rc genhtml_legend=1 00:08:32.359 --rc geninfo_all_blocks=1 00:08:32.359 --rc geninfo_unexecuted_blocks=1 00:08:32.359 00:08:32.359 ' 00:08:32.359 11:02:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:32.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:32.359 --rc genhtml_branch_coverage=1 00:08:32.359 --rc genhtml_function_coverage=1 00:08:32.359 --rc genhtml_legend=1 00:08:32.359 --rc geninfo_all_blocks=1 00:08:32.359 --rc geninfo_unexecuted_blocks=1 00:08:32.359 00:08:32.359 ' 00:08:32.359 11:02:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:32.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.359 --rc genhtml_branch_coverage=1 00:08:32.359 --rc genhtml_function_coverage=1 00:08:32.359 --rc genhtml_legend=1 00:08:32.359 --rc geninfo_all_blocks=1 00:08:32.359 --rc geninfo_unexecuted_blocks=1 00:08:32.359 00:08:32.359 ' 00:08:32.359 11:02:52 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:32.359 11:02:52 -- nvmf/common.sh@7 -- # uname -s 00:08:32.359 11:02:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:32.359 11:02:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:32.359 11:02:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:32.359 11:02:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:32.359 11:02:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:32.359 11:02:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:32.359 11:02:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:32.359 11:02:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:32.359 11:02:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:32.359 11:02:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:32.359 11:02:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:08:32.359 11:02:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:08:32.359 11:02:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:32.359 11:02:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:32.359 11:02:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:32.359 11:02:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:32.359 11:02:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.359 11:02:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.359 11:02:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.359 11:02:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.359 11:02:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.359 11:02:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.359 11:02:52 -- paths/export.sh@5 -- # export PATH 00:08:32.359 11:02:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.359 11:02:52 -- nvmf/common.sh@46 -- # : 0 00:08:32.359 11:02:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:32.360 11:02:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:32.360 11:02:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:32.360 11:02:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:32.360 11:02:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:32.360 11:02:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:32.360 11:02:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:32.360 11:02:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:32.360 11:02:52 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:32.360 11:02:52 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:32.360 11:02:52 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:32.360 11:02:52 -- target/discovery.sh@15 -- # hash nvme 00:08:32.360 11:02:52 -- target/discovery.sh@20 -- # nvmftestinit 00:08:32.360 11:02:52 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:32.360 11:02:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:32.360 11:02:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:32.360 11:02:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:32.360 11:02:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:32.360 11:02:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.360 11:02:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:32.360 11:02:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.360 11:02:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:32.360 11:02:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:32.360 11:02:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:32.360 11:02:52 -- common/autotest_common.sh@10 -- # set +x 00:08:37.637 11:02:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:37.637 11:02:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:37.637 11:02:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:37.637 11:02:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:37.637 11:02:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:37.637 11:02:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:37.637 11:02:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:37.637 11:02:57 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:37.637 11:02:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:37.637 11:02:57 -- nvmf/common.sh@295 -- # e810=() 00:08:37.637 11:02:57 -- nvmf/common.sh@295 -- # local -ga e810 00:08:37.637 11:02:57 -- nvmf/common.sh@296 -- # x722=() 00:08:37.637 11:02:57 -- nvmf/common.sh@296 -- # local -ga x722 00:08:37.637 11:02:57 -- nvmf/common.sh@297 -- # mlx=() 00:08:37.637 11:02:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:37.637 11:02:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:37.637 11:02:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:37.637 11:02:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:37.637 11:02:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:37.637 11:02:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:37.637 11:02:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:37.637 11:02:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:37.637 11:02:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:37.637 11:02:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:37.637 11:02:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:37.637 11:02:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:37.637 11:02:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:37.637 11:02:57 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:37.637 11:02:57 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:37.637 11:02:57 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:37.637 11:02:57 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:37.637 11:02:57 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:37.637 11:02:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:37.637 11:02:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:37.637 11:02:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:37.637 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:37.637 11:02:57 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:37.637 11:02:57 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:37.637 11:02:57 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:37.637 11:02:57 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:37.637 11:02:57 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:37.637 11:02:57 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:37.637 11:02:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:37.637 11:02:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:37.637 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:37.637 11:02:57 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:37.637 11:02:57 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:37.637 11:02:57 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:37.637 11:02:57 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:37.637 11:02:57 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:37.637 11:02:57 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:37.637 11:02:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:37.637 11:02:57 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:37.637 11:02:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:37.637 
11:02:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.637 11:02:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:37.637 11:02:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.637 11:02:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:37.637 Found net devices under 0000:18:00.0: mlx_0_0 00:08:37.637 11:02:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.637 11:02:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:37.637 11:02:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.637 11:02:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:37.637 11:02:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.637 11:02:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:37.637 Found net devices under 0000:18:00.1: mlx_0_1 00:08:37.637 11:02:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.637 11:02:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:37.637 11:02:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:37.637 11:02:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:37.637 11:02:57 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:37.637 11:02:57 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:37.637 11:02:57 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:37.637 11:02:57 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:37.637 11:02:57 -- nvmf/common.sh@57 -- # uname 00:08:37.637 11:02:57 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:37.637 11:02:57 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:37.637 11:02:57 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:37.637 11:02:57 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:37.637 11:02:57 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:37.637 11:02:57 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:37.637 11:02:57 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:37.637 11:02:57 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:37.637 11:02:57 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:37.637 11:02:57 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:37.637 11:02:57 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:37.637 11:02:57 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:37.637 11:02:57 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:37.637 11:02:57 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:37.637 11:02:57 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:37.637 11:02:58 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:37.637 11:02:58 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:37.637 11:02:58 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:37.637 11:02:58 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:37.637 11:02:58 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:37.637 11:02:58 -- nvmf/common.sh@104 -- # continue 2 00:08:37.637 11:02:58 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:37.638 11:02:58 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:37.638 11:02:58 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:37.638 11:02:58 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:37.638 11:02:58 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:37.638 11:02:58 -- nvmf/common.sh@103 -- # 
echo mlx_0_1 00:08:37.638 11:02:58 -- nvmf/common.sh@104 -- # continue 2 00:08:37.638 11:02:58 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:37.638 11:02:58 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:37.638 11:02:58 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:37.638 11:02:58 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:37.638 11:02:58 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:37.638 11:02:58 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:37.638 11:02:58 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:37.638 11:02:58 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:37.638 11:02:58 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:37.638 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:37.638 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:08:37.638 altname enp24s0f0np0 00:08:37.638 altname ens785f0np0 00:08:37.638 inet 192.168.100.8/24 scope global mlx_0_0 00:08:37.638 valid_lft forever preferred_lft forever 00:08:37.638 11:02:58 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:37.638 11:02:58 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:37.638 11:02:58 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:37.638 11:02:58 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:37.638 11:02:58 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:37.638 11:02:58 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:37.638 11:02:58 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:37.638 11:02:58 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:37.638 11:02:58 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:37.638 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:37.638 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:08:37.638 altname enp24s0f1np1 00:08:37.638 altname ens785f1np1 00:08:37.638 inet 192.168.100.9/24 scope global mlx_0_1 00:08:37.638 valid_lft forever preferred_lft forever 00:08:37.638 11:02:58 -- nvmf/common.sh@410 -- # return 0 00:08:37.638 11:02:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:37.638 11:02:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:37.638 11:02:58 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:37.638 11:02:58 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:37.638 11:02:58 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:37.638 11:02:58 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:37.638 11:02:58 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:37.638 11:02:58 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:37.638 11:02:58 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:37.638 11:02:58 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:37.638 11:02:58 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:37.638 11:02:58 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:37.638 11:02:58 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:37.638 11:02:58 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:37.638 11:02:58 -- nvmf/common.sh@104 -- # continue 2 00:08:37.638 11:02:58 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:37.638 11:02:58 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:37.638 11:02:58 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:37.638 11:02:58 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:37.638 11:02:58 -- nvmf/common.sh@102 -- # 
[[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:37.638 11:02:58 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:37.638 11:02:58 -- nvmf/common.sh@104 -- # continue 2 00:08:37.638 11:02:58 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:37.638 11:02:58 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:37.638 11:02:58 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:37.638 11:02:58 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:37.638 11:02:58 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:37.638 11:02:58 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:37.638 11:02:58 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:37.638 11:02:58 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:37.638 11:02:58 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:37.638 11:02:58 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:37.638 11:02:58 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:37.638 11:02:58 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:37.638 11:02:58 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:37.638 192.168.100.9' 00:08:37.638 11:02:58 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:37.638 192.168.100.9' 00:08:37.638 11:02:58 -- nvmf/common.sh@445 -- # head -n 1 00:08:37.638 11:02:58 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:37.638 11:02:58 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:37.638 192.168.100.9' 00:08:37.638 11:02:58 -- nvmf/common.sh@446 -- # tail -n +2 00:08:37.638 11:02:58 -- nvmf/common.sh@446 -- # head -n 1 00:08:37.638 11:02:58 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:37.638 11:02:58 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:37.638 11:02:58 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:37.638 11:02:58 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:37.638 11:02:58 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:37.638 11:02:58 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:37.638 11:02:58 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:37.638 11:02:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:37.638 11:02:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:37.638 11:02:58 -- common/autotest_common.sh@10 -- # set +x 00:08:37.638 11:02:58 -- nvmf/common.sh@469 -- # nvmfpid=1485763 00:08:37.638 11:02:58 -- nvmf/common.sh@470 -- # waitforlisten 1485763 00:08:37.638 11:02:58 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:37.638 11:02:58 -- common/autotest_common.sh@829 -- # '[' -z 1485763 ']' 00:08:37.638 11:02:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.638 11:02:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:37.638 11:02:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.638 11:02:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:37.638 11:02:58 -- common/autotest_common.sh@10 -- # set +x 00:08:37.638 [2024-12-13 11:02:58.192742] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:37.638 [2024-12-13 11:02:58.192786] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.897 EAL: No free 2048 kB hugepages reported on node 1 00:08:37.897 [2024-12-13 11:02:58.245584] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:37.898 [2024-12-13 11:02:58.315757] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:37.898 [2024-12-13 11:02:58.315863] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:37.898 [2024-12-13 11:02:58.315870] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:37.898 [2024-12-13 11:02:58.315880] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:37.898 [2024-12-13 11:02:58.315926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.898 [2024-12-13 11:02:58.316021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:37.898 [2024-12-13 11:02:58.316083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:37.898 [2024-12-13 11:02:58.316084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.465 11:02:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:38.465 11:02:58 -- common/autotest_common.sh@862 -- # return 0 00:08:38.465 11:02:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:38.465 11:02:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:38.465 11:02:58 -- common/autotest_common.sh@10 -- # set +x 00:08:38.465 11:02:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:38.465 11:02:59 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:38.465 11:02:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.465 11:02:59 -- common/autotest_common.sh@10 -- # set +x 00:08:38.724 [2024-12-13 11:02:59.044891] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x207a960/0x207ee50) succeed. 00:08:38.724 [2024-12-13 11:02:59.053030] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x207bf50/0x20c04f0) succeed. 
00:08:38.724 11:02:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.724 11:02:59 -- target/discovery.sh@26 -- # seq 1 4 00:08:38.724 11:02:59 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:38.724 11:02:59 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:38.724 11:02:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.724 11:02:59 -- common/autotest_common.sh@10 -- # set +x 00:08:38.724 Null1 00:08:38.724 11:02:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.724 11:02:59 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:38.724 11:02:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.724 11:02:59 -- common/autotest_common.sh@10 -- # set +x 00:08:38.724 11:02:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.724 11:02:59 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:38.724 11:02:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.724 11:02:59 -- common/autotest_common.sh@10 -- # set +x 00:08:38.724 11:02:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.724 11:02:59 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:38.724 11:02:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.724 11:02:59 -- common/autotest_common.sh@10 -- # set +x 00:08:38.724 [2024-12-13 11:02:59.200188] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:38.724 11:02:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.724 11:02:59 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:38.724 11:02:59 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:38.724 11:02:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.724 11:02:59 -- common/autotest_common.sh@10 -- # set +x 00:08:38.724 Null2 00:08:38.724 11:02:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.724 11:02:59 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:38.724 11:02:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.724 11:02:59 -- common/autotest_common.sh@10 -- # set +x 00:08:38.724 11:02:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.724 11:02:59 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:38.724 11:02:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.724 11:02:59 -- common/autotest_common.sh@10 -- # set +x 00:08:38.724 11:02:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.724 11:02:59 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:08:38.724 11:02:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.724 11:02:59 -- common/autotest_common.sh@10 -- # set +x 00:08:38.724 11:02:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.724 11:02:59 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:38.724 11:02:59 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:38.724 11:02:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.724 11:02:59 -- common/autotest_common.sh@10 -- # set +x 00:08:38.724 Null3 00:08:38.724 11:02:59 -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:08:38.724 11:02:59 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:38.724 11:02:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.724 11:02:59 -- common/autotest_common.sh@10 -- # set +x 00:08:38.724 11:02:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.724 11:02:59 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:38.724 11:02:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.724 11:02:59 -- common/autotest_common.sh@10 -- # set +x 00:08:38.724 11:02:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.724 11:02:59 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:08:38.724 11:02:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.724 11:02:59 -- common/autotest_common.sh@10 -- # set +x 00:08:38.724 11:02:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.724 11:02:59 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:38.724 11:02:59 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:38.724 11:02:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.724 11:02:59 -- common/autotest_common.sh@10 -- # set +x 00:08:38.724 Null4 00:08:38.724 11:02:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.724 11:02:59 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:38.724 11:02:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.724 11:02:59 -- common/autotest_common.sh@10 -- # set +x 00:08:38.724 11:02:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.724 11:02:59 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:38.724 11:02:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.724 11:02:59 -- common/autotest_common.sh@10 -- # set +x 00:08:38.724 11:02:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.984 11:02:59 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:08:38.984 11:02:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.984 11:02:59 -- common/autotest_common.sh@10 -- # set +x 00:08:38.984 11:02:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.984 11:02:59 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:38.984 11:02:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.984 11:02:59 -- common/autotest_common.sh@10 -- # set +x 00:08:38.984 11:02:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.984 11:02:59 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:08:38.984 11:02:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.984 11:02:59 -- common/autotest_common.sh@10 -- # set +x 00:08:38.984 11:02:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.984 11:02:59 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:08:38.984 00:08:38.984 Discovery Log Number of Records 6, Generation counter 6 00:08:38.984 =====Discovery Log Entry 0====== 00:08:38.984 trtype: 
rdma 00:08:38.984 adrfam: ipv4 00:08:38.984 subtype: current discovery subsystem 00:08:38.984 treq: not required 00:08:38.984 portid: 0 00:08:38.984 trsvcid: 4420 00:08:38.984 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:38.984 traddr: 192.168.100.8 00:08:38.984 eflags: explicit discovery connections, duplicate discovery information 00:08:38.984 rdma_prtype: not specified 00:08:38.984 rdma_qptype: connected 00:08:38.984 rdma_cms: rdma-cm 00:08:38.984 rdma_pkey: 0x0000 00:08:38.984 =====Discovery Log Entry 1====== 00:08:38.984 trtype: rdma 00:08:38.984 adrfam: ipv4 00:08:38.984 subtype: nvme subsystem 00:08:38.984 treq: not required 00:08:38.984 portid: 0 00:08:38.984 trsvcid: 4420 00:08:38.984 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:38.984 traddr: 192.168.100.8 00:08:38.984 eflags: none 00:08:38.984 rdma_prtype: not specified 00:08:38.984 rdma_qptype: connected 00:08:38.984 rdma_cms: rdma-cm 00:08:38.984 rdma_pkey: 0x0000 00:08:38.984 =====Discovery Log Entry 2====== 00:08:38.984 trtype: rdma 00:08:38.984 adrfam: ipv4 00:08:38.984 subtype: nvme subsystem 00:08:38.984 treq: not required 00:08:38.984 portid: 0 00:08:38.984 trsvcid: 4420 00:08:38.984 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:38.984 traddr: 192.168.100.8 00:08:38.984 eflags: none 00:08:38.984 rdma_prtype: not specified 00:08:38.984 rdma_qptype: connected 00:08:38.984 rdma_cms: rdma-cm 00:08:38.984 rdma_pkey: 0x0000 00:08:38.984 =====Discovery Log Entry 3====== 00:08:38.984 trtype: rdma 00:08:38.984 adrfam: ipv4 00:08:38.984 subtype: nvme subsystem 00:08:38.984 treq: not required 00:08:38.984 portid: 0 00:08:38.984 trsvcid: 4420 00:08:38.984 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:38.984 traddr: 192.168.100.8 00:08:38.984 eflags: none 00:08:38.984 rdma_prtype: not specified 00:08:38.984 rdma_qptype: connected 00:08:38.984 rdma_cms: rdma-cm 00:08:38.984 rdma_pkey: 0x0000 00:08:38.984 =====Discovery Log Entry 4====== 00:08:38.984 trtype: rdma 00:08:38.984 adrfam: ipv4 00:08:38.984 subtype: nvme subsystem 00:08:38.984 treq: not required 00:08:38.984 portid: 0 00:08:38.984 trsvcid: 4420 00:08:38.984 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:38.984 traddr: 192.168.100.8 00:08:38.984 eflags: none 00:08:38.984 rdma_prtype: not specified 00:08:38.984 rdma_qptype: connected 00:08:38.984 rdma_cms: rdma-cm 00:08:38.984 rdma_pkey: 0x0000 00:08:38.984 =====Discovery Log Entry 5====== 00:08:38.984 trtype: rdma 00:08:38.984 adrfam: ipv4 00:08:38.984 subtype: discovery subsystem referral 00:08:38.984 treq: not required 00:08:38.984 portid: 0 00:08:38.984 trsvcid: 4430 00:08:38.984 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:38.984 traddr: 192.168.100.8 00:08:38.984 eflags: none 00:08:38.984 rdma_prtype: unrecognized 00:08:38.984 rdma_qptype: unrecognized 00:08:38.984 rdma_cms: unrecognized 00:08:38.984 rdma_pkey: 0x0000 00:08:38.984 11:02:59 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:38.984 Perform nvmf subsystem discovery via RPC 00:08:38.984 11:02:59 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:38.984 11:02:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.984 11:02:59 -- common/autotest_common.sh@10 -- # set +x 00:08:38.984 [2024-12-13 11:02:59.420640] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:38.984 [ 00:08:38.984 { 00:08:38.984 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:38.984 "subtype": "Discovery", 
00:08:38.984 "listen_addresses": [ 00:08:38.984 { 00:08:38.984 "transport": "RDMA", 00:08:38.984 "trtype": "RDMA", 00:08:38.984 "adrfam": "IPv4", 00:08:38.984 "traddr": "192.168.100.8", 00:08:38.984 "trsvcid": "4420" 00:08:38.984 } 00:08:38.984 ], 00:08:38.984 "allow_any_host": true, 00:08:38.984 "hosts": [] 00:08:38.984 }, 00:08:38.984 { 00:08:38.984 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:38.984 "subtype": "NVMe", 00:08:38.984 "listen_addresses": [ 00:08:38.984 { 00:08:38.984 "transport": "RDMA", 00:08:38.984 "trtype": "RDMA", 00:08:38.984 "adrfam": "IPv4", 00:08:38.984 "traddr": "192.168.100.8", 00:08:38.984 "trsvcid": "4420" 00:08:38.984 } 00:08:38.984 ], 00:08:38.984 "allow_any_host": true, 00:08:38.984 "hosts": [], 00:08:38.984 "serial_number": "SPDK00000000000001", 00:08:38.984 "model_number": "SPDK bdev Controller", 00:08:38.984 "max_namespaces": 32, 00:08:38.984 "min_cntlid": 1, 00:08:38.984 "max_cntlid": 65519, 00:08:38.984 "namespaces": [ 00:08:38.984 { 00:08:38.984 "nsid": 1, 00:08:38.984 "bdev_name": "Null1", 00:08:38.984 "name": "Null1", 00:08:38.984 "nguid": "78434F240E574F24BC8B7417FD5B6F88", 00:08:38.984 "uuid": "78434f24-0e57-4f24-bc8b-7417fd5b6f88" 00:08:38.984 } 00:08:38.984 ] 00:08:38.984 }, 00:08:38.984 { 00:08:38.984 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:38.984 "subtype": "NVMe", 00:08:38.984 "listen_addresses": [ 00:08:38.984 { 00:08:38.984 "transport": "RDMA", 00:08:38.984 "trtype": "RDMA", 00:08:38.984 "adrfam": "IPv4", 00:08:38.984 "traddr": "192.168.100.8", 00:08:38.984 "trsvcid": "4420" 00:08:38.984 } 00:08:38.984 ], 00:08:38.984 "allow_any_host": true, 00:08:38.984 "hosts": [], 00:08:38.984 "serial_number": "SPDK00000000000002", 00:08:38.985 "model_number": "SPDK bdev Controller", 00:08:38.985 "max_namespaces": 32, 00:08:38.985 "min_cntlid": 1, 00:08:38.985 "max_cntlid": 65519, 00:08:38.985 "namespaces": [ 00:08:38.985 { 00:08:38.985 "nsid": 1, 00:08:38.985 "bdev_name": "Null2", 00:08:38.985 "name": "Null2", 00:08:38.985 "nguid": "EE6CCDD7E23F45BEA3544CB0B211F1C8", 00:08:38.985 "uuid": "ee6ccdd7-e23f-45be-a354-4cb0b211f1c8" 00:08:38.985 } 00:08:38.985 ] 00:08:38.985 }, 00:08:38.985 { 00:08:38.985 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:38.985 "subtype": "NVMe", 00:08:38.985 "listen_addresses": [ 00:08:38.985 { 00:08:38.985 "transport": "RDMA", 00:08:38.985 "trtype": "RDMA", 00:08:38.985 "adrfam": "IPv4", 00:08:38.985 "traddr": "192.168.100.8", 00:08:38.985 "trsvcid": "4420" 00:08:38.985 } 00:08:38.985 ], 00:08:38.985 "allow_any_host": true, 00:08:38.985 "hosts": [], 00:08:38.985 "serial_number": "SPDK00000000000003", 00:08:38.985 "model_number": "SPDK bdev Controller", 00:08:38.985 "max_namespaces": 32, 00:08:38.985 "min_cntlid": 1, 00:08:38.985 "max_cntlid": 65519, 00:08:38.985 "namespaces": [ 00:08:38.985 { 00:08:38.985 "nsid": 1, 00:08:38.985 "bdev_name": "Null3", 00:08:38.985 "name": "Null3", 00:08:38.985 "nguid": "A227DB682B2A498BA728A34A5F01D2AB", 00:08:38.985 "uuid": "a227db68-2b2a-498b-a728-a34a5f01d2ab" 00:08:38.985 } 00:08:38.985 ] 00:08:38.985 }, 00:08:38.985 { 00:08:38.985 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:38.985 "subtype": "NVMe", 00:08:38.985 "listen_addresses": [ 00:08:38.985 { 00:08:38.985 "transport": "RDMA", 00:08:38.985 "trtype": "RDMA", 00:08:38.985 "adrfam": "IPv4", 00:08:38.985 "traddr": "192.168.100.8", 00:08:38.985 "trsvcid": "4420" 00:08:38.985 } 00:08:38.985 ], 00:08:38.985 "allow_any_host": true, 00:08:38.985 "hosts": [], 00:08:38.985 "serial_number": "SPDK00000000000004", 00:08:38.985 "model_number": "SPDK bdev 
Controller", 00:08:38.985 "max_namespaces": 32, 00:08:38.985 "min_cntlid": 1, 00:08:38.985 "max_cntlid": 65519, 00:08:38.985 "namespaces": [ 00:08:38.985 { 00:08:38.985 "nsid": 1, 00:08:38.985 "bdev_name": "Null4", 00:08:38.985 "name": "Null4", 00:08:38.985 "nguid": "D407538C028346FFA472E735C7ADE6C4", 00:08:38.985 "uuid": "d407538c-0283-46ff-a472-e735c7ade6c4" 00:08:38.985 } 00:08:38.985 ] 00:08:38.985 } 00:08:38.985 ] 00:08:38.985 11:02:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.985 11:02:59 -- target/discovery.sh@42 -- # seq 1 4 00:08:38.985 11:02:59 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:38.985 11:02:59 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:38.985 11:02:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.985 11:02:59 -- common/autotest_common.sh@10 -- # set +x 00:08:38.985 11:02:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.985 11:02:59 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:38.985 11:02:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.985 11:02:59 -- common/autotest_common.sh@10 -- # set +x 00:08:38.985 11:02:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.985 11:02:59 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:38.985 11:02:59 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:38.985 11:02:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.985 11:02:59 -- common/autotest_common.sh@10 -- # set +x 00:08:38.985 11:02:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.985 11:02:59 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:38.985 11:02:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.985 11:02:59 -- common/autotest_common.sh@10 -- # set +x 00:08:38.985 11:02:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.985 11:02:59 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:38.985 11:02:59 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:38.985 11:02:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.985 11:02:59 -- common/autotest_common.sh@10 -- # set +x 00:08:38.985 11:02:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.985 11:02:59 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:38.985 11:02:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.985 11:02:59 -- common/autotest_common.sh@10 -- # set +x 00:08:38.985 11:02:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.985 11:02:59 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:38.985 11:02:59 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:38.985 11:02:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.985 11:02:59 -- common/autotest_common.sh@10 -- # set +x 00:08:38.985 11:02:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.985 11:02:59 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:38.985 11:02:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.985 11:02:59 -- common/autotest_common.sh@10 -- # set +x 00:08:38.985 11:02:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.985 11:02:59 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:08:38.985 11:02:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.985 
11:02:59 -- common/autotest_common.sh@10 -- # set +x 00:08:38.985 11:02:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.985 11:02:59 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:38.985 11:02:59 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:38.985 11:02:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.985 11:02:59 -- common/autotest_common.sh@10 -- # set +x 00:08:38.985 11:02:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.244 11:02:59 -- target/discovery.sh@49 -- # check_bdevs= 00:08:39.244 11:02:59 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:39.244 11:02:59 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:39.244 11:02:59 -- target/discovery.sh@57 -- # nvmftestfini 00:08:39.244 11:02:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:39.244 11:02:59 -- nvmf/common.sh@116 -- # sync 00:08:39.244 11:02:59 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:08:39.244 11:02:59 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:08:39.244 11:02:59 -- nvmf/common.sh@119 -- # set +e 00:08:39.244 11:02:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:39.244 11:02:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:08:39.244 rmmod nvme_rdma 00:08:39.244 rmmod nvme_fabrics 00:08:39.244 11:02:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:39.244 11:02:59 -- nvmf/common.sh@123 -- # set -e 00:08:39.244 11:02:59 -- nvmf/common.sh@124 -- # return 0 00:08:39.244 11:02:59 -- nvmf/common.sh@477 -- # '[' -n 1485763 ']' 00:08:39.244 11:02:59 -- nvmf/common.sh@478 -- # killprocess 1485763 00:08:39.244 11:02:59 -- common/autotest_common.sh@936 -- # '[' -z 1485763 ']' 00:08:39.244 11:02:59 -- common/autotest_common.sh@940 -- # kill -0 1485763 00:08:39.244 11:02:59 -- common/autotest_common.sh@941 -- # uname 00:08:39.244 11:02:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:39.244 11:02:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1485763 00:08:39.244 11:02:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:39.244 11:02:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:39.244 11:02:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1485763' 00:08:39.244 killing process with pid 1485763 00:08:39.244 11:02:59 -- common/autotest_common.sh@955 -- # kill 1485763 00:08:39.244 [2024-12-13 11:02:59.668167] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:39.244 11:02:59 -- common/autotest_common.sh@960 -- # wait 1485763 00:08:39.503 11:02:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:39.503 11:02:59 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:08:39.503 00:08:39.503 real 0m7.350s 00:08:39.503 user 0m7.935s 00:08:39.503 sys 0m4.525s 00:08:39.503 11:02:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:39.503 11:02:59 -- common/autotest_common.sh@10 -- # set +x 00:08:39.503 ************************************ 00:08:39.503 END TEST nvmf_discovery 00:08:39.503 ************************************ 00:08:39.503 11:02:59 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:39.503 11:02:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:39.503 11:02:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:39.503 11:02:59 -- 
common/autotest_common.sh@10 -- # set +x 00:08:39.503 ************************************ 00:08:39.503 START TEST nvmf_referrals 00:08:39.503 ************************************ 00:08:39.503 11:02:59 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:39.503 * Looking for test storage... 00:08:39.503 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:39.503 11:03:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:39.503 11:03:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:39.503 11:03:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:39.762 11:03:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:39.762 11:03:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:39.762 11:03:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:39.762 11:03:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:39.762 11:03:00 -- scripts/common.sh@335 -- # IFS=.-: 00:08:39.762 11:03:00 -- scripts/common.sh@335 -- # read -ra ver1 00:08:39.762 11:03:00 -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.762 11:03:00 -- scripts/common.sh@336 -- # read -ra ver2 00:08:39.762 11:03:00 -- scripts/common.sh@337 -- # local 'op=<' 00:08:39.762 11:03:00 -- scripts/common.sh@339 -- # ver1_l=2 00:08:39.762 11:03:00 -- scripts/common.sh@340 -- # ver2_l=1 00:08:39.762 11:03:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:39.762 11:03:00 -- scripts/common.sh@343 -- # case "$op" in 00:08:39.762 11:03:00 -- scripts/common.sh@344 -- # : 1 00:08:39.762 11:03:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:39.762 11:03:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:39.762 11:03:00 -- scripts/common.sh@364 -- # decimal 1 00:08:39.762 11:03:00 -- scripts/common.sh@352 -- # local d=1 00:08:39.762 11:03:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.762 11:03:00 -- scripts/common.sh@354 -- # echo 1 00:08:39.762 11:03:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:39.762 11:03:00 -- scripts/common.sh@365 -- # decimal 2 00:08:39.762 11:03:00 -- scripts/common.sh@352 -- # local d=2 00:08:39.762 11:03:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.762 11:03:00 -- scripts/common.sh@354 -- # echo 2 00:08:39.762 11:03:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:39.762 11:03:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:39.762 11:03:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:39.762 11:03:00 -- scripts/common.sh@367 -- # return 0 00:08:39.762 11:03:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.762 11:03:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:39.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.762 --rc genhtml_branch_coverage=1 00:08:39.762 --rc genhtml_function_coverage=1 00:08:39.762 --rc genhtml_legend=1 00:08:39.762 --rc geninfo_all_blocks=1 00:08:39.762 --rc geninfo_unexecuted_blocks=1 00:08:39.762 00:08:39.762 ' 00:08:39.762 11:03:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:39.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.762 --rc genhtml_branch_coverage=1 00:08:39.762 --rc genhtml_function_coverage=1 00:08:39.762 --rc genhtml_legend=1 00:08:39.762 --rc geninfo_all_blocks=1 00:08:39.762 --rc geninfo_unexecuted_blocks=1 00:08:39.762 00:08:39.762 ' 00:08:39.762 
11:03:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:39.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.762 --rc genhtml_branch_coverage=1 00:08:39.762 --rc genhtml_function_coverage=1 00:08:39.762 --rc genhtml_legend=1 00:08:39.762 --rc geninfo_all_blocks=1 00:08:39.762 --rc geninfo_unexecuted_blocks=1 00:08:39.762 00:08:39.762 ' 00:08:39.762 11:03:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:39.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.762 --rc genhtml_branch_coverage=1 00:08:39.762 --rc genhtml_function_coverage=1 00:08:39.762 --rc genhtml_legend=1 00:08:39.762 --rc geninfo_all_blocks=1 00:08:39.762 --rc geninfo_unexecuted_blocks=1 00:08:39.762 00:08:39.762 ' 00:08:39.763 11:03:00 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:39.763 11:03:00 -- nvmf/common.sh@7 -- # uname -s 00:08:39.763 11:03:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:39.763 11:03:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.763 11:03:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.763 11:03:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.763 11:03:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.763 11:03:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.763 11:03:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.763 11:03:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.763 11:03:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.763 11:03:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.763 11:03:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:08:39.763 11:03:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:08:39.763 11:03:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.763 11:03:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.763 11:03:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:39.763 11:03:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:39.763 11:03:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.763 11:03:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.763 11:03:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.763 11:03:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.763 11:03:00 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.763 11:03:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.763 11:03:00 -- paths/export.sh@5 -- # export PATH 00:08:39.763 11:03:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.763 11:03:00 -- nvmf/common.sh@46 -- # : 0 00:08:39.763 11:03:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:39.763 11:03:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:39.763 11:03:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:39.763 11:03:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.763 11:03:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.763 11:03:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:39.763 11:03:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:39.763 11:03:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:39.763 11:03:00 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:39.763 11:03:00 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:39.763 11:03:00 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:39.763 11:03:00 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:39.763 11:03:00 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:39.763 11:03:00 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:39.763 11:03:00 -- target/referrals.sh@37 -- # nvmftestinit 00:08:39.763 11:03:00 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:39.763 11:03:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:39.763 11:03:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:39.763 11:03:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:39.763 11:03:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:39.763 11:03:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.763 11:03:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:39.763 11:03:00 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:08:39.763 11:03:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:39.763 11:03:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:39.763 11:03:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:39.763 11:03:00 -- common/autotest_common.sh@10 -- # set +x 00:08:46.330 11:03:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:46.330 11:03:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:46.330 11:03:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:46.330 11:03:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:46.330 11:03:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:46.330 11:03:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:46.330 11:03:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:46.330 11:03:05 -- nvmf/common.sh@294 -- # net_devs=() 00:08:46.330 11:03:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:46.330 11:03:05 -- nvmf/common.sh@295 -- # e810=() 00:08:46.330 11:03:05 -- nvmf/common.sh@295 -- # local -ga e810 00:08:46.330 11:03:05 -- nvmf/common.sh@296 -- # x722=() 00:08:46.330 11:03:05 -- nvmf/common.sh@296 -- # local -ga x722 00:08:46.330 11:03:05 -- nvmf/common.sh@297 -- # mlx=() 00:08:46.330 11:03:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:46.330 11:03:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:46.330 11:03:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:46.330 11:03:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:46.330 11:03:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:46.330 11:03:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:46.330 11:03:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:46.330 11:03:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:46.330 11:03:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:46.330 11:03:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:46.330 11:03:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:46.330 11:03:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:46.330 11:03:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:46.331 11:03:05 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:46.331 11:03:05 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:46.331 11:03:05 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:46.331 11:03:05 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:46.331 11:03:05 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:46.331 11:03:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:46.331 11:03:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:46.331 11:03:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:46.331 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:46.331 11:03:05 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:46.331 11:03:05 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:46.331 11:03:05 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:46.331 11:03:05 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:46.331 11:03:05 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:46.331 11:03:05 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:46.331 11:03:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:46.331 11:03:05 
-- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:46.331 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:46.331 11:03:05 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:46.331 11:03:05 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:46.331 11:03:05 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:46.331 11:03:05 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:46.331 11:03:05 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:46.331 11:03:05 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:46.331 11:03:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:46.331 11:03:05 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:46.331 11:03:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:46.331 11:03:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.331 11:03:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:46.331 11:03:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.331 11:03:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:46.331 Found net devices under 0000:18:00.0: mlx_0_0 00:08:46.331 11:03:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.331 11:03:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:46.331 11:03:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.331 11:03:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:46.331 11:03:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.331 11:03:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:46.331 Found net devices under 0000:18:00.1: mlx_0_1 00:08:46.331 11:03:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.331 11:03:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:46.331 11:03:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:46.331 11:03:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:46.331 11:03:05 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:46.331 11:03:05 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:46.331 11:03:05 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:46.331 11:03:05 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:46.331 11:03:05 -- nvmf/common.sh@57 -- # uname 00:08:46.331 11:03:05 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:46.331 11:03:05 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:46.331 11:03:05 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:46.331 11:03:05 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:46.331 11:03:05 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:46.331 11:03:05 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:46.331 11:03:05 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:46.331 11:03:05 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:46.331 11:03:05 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:46.331 11:03:05 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:46.331 11:03:05 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:46.331 11:03:05 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:46.331 11:03:05 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:46.331 11:03:05 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:46.331 11:03:05 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:46.331 11:03:05 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:46.331 11:03:05 -- nvmf/common.sh@100 
-- # for net_dev in "${net_devs[@]}" 00:08:46.331 11:03:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:46.331 11:03:05 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:46.331 11:03:05 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:46.331 11:03:05 -- nvmf/common.sh@104 -- # continue 2 00:08:46.331 11:03:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:46.331 11:03:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:46.331 11:03:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:46.331 11:03:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:46.331 11:03:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:46.331 11:03:05 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:46.331 11:03:05 -- nvmf/common.sh@104 -- # continue 2 00:08:46.331 11:03:05 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:46.331 11:03:05 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:46.331 11:03:05 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:46.331 11:03:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:46.331 11:03:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:46.331 11:03:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:46.331 11:03:05 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:46.331 11:03:05 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:46.331 11:03:05 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:46.331 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:46.331 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:08:46.331 altname enp24s0f0np0 00:08:46.331 altname ens785f0np0 00:08:46.331 inet 192.168.100.8/24 scope global mlx_0_0 00:08:46.331 valid_lft forever preferred_lft forever 00:08:46.331 11:03:05 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:46.331 11:03:05 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:46.331 11:03:05 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:46.331 11:03:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:46.331 11:03:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:46.331 11:03:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:46.331 11:03:05 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:46.331 11:03:05 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:46.331 11:03:05 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:46.331 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:46.331 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:08:46.331 altname enp24s0f1np1 00:08:46.331 altname ens785f1np1 00:08:46.331 inet 192.168.100.9/24 scope global mlx_0_1 00:08:46.331 valid_lft forever preferred_lft forever 00:08:46.331 11:03:05 -- nvmf/common.sh@410 -- # return 0 00:08:46.331 11:03:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:46.331 11:03:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:46.331 11:03:05 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:46.331 11:03:05 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:46.331 11:03:05 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:46.331 11:03:05 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:46.331 11:03:05 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:46.331 11:03:05 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:46.331 11:03:05 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:46.331 11:03:05 
-- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:46.331 11:03:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:46.331 11:03:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:46.331 11:03:05 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:46.331 11:03:05 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:46.331 11:03:05 -- nvmf/common.sh@104 -- # continue 2 00:08:46.331 11:03:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:46.331 11:03:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:46.331 11:03:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:46.331 11:03:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:46.331 11:03:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:46.331 11:03:05 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:46.331 11:03:05 -- nvmf/common.sh@104 -- # continue 2 00:08:46.331 11:03:05 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:46.331 11:03:05 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:46.331 11:03:05 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:46.331 11:03:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:46.331 11:03:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:46.331 11:03:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:46.331 11:03:05 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:46.331 11:03:05 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:46.331 11:03:05 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:46.331 11:03:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:46.331 11:03:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:46.331 11:03:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:46.331 11:03:05 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:46.331 192.168.100.9' 00:08:46.331 11:03:05 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:46.331 192.168.100.9' 00:08:46.331 11:03:05 -- nvmf/common.sh@445 -- # head -n 1 00:08:46.331 11:03:05 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:46.331 11:03:05 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:46.331 192.168.100.9' 00:08:46.331 11:03:05 -- nvmf/common.sh@446 -- # tail -n +2 00:08:46.331 11:03:05 -- nvmf/common.sh@446 -- # head -n 1 00:08:46.331 11:03:05 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:46.331 11:03:05 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:46.331 11:03:05 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:46.331 11:03:05 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:46.331 11:03:05 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:46.331 11:03:05 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:46.331 11:03:05 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:46.331 11:03:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:46.331 11:03:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:46.331 11:03:05 -- common/autotest_common.sh@10 -- # set +x 00:08:46.331 11:03:05 -- nvmf/common.sh@469 -- # nvmfpid=1489885 00:08:46.331 11:03:05 -- nvmf/common.sh@470 -- # waitforlisten 1489885 00:08:46.332 11:03:05 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:46.332 11:03:05 -- common/autotest_common.sh@829 -- # '[' -z 1489885 ']' 00:08:46.332 11:03:05 -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:08:46.332 11:03:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:46.332 11:03:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.332 11:03:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:46.332 11:03:06 -- common/autotest_common.sh@10 -- # set +x 00:08:46.332 [2024-12-13 11:03:06.043158] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:46.332 [2024-12-13 11:03:06.043203] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.332 EAL: No free 2048 kB hugepages reported on node 1 00:08:46.332 [2024-12-13 11:03:06.095906] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:46.332 [2024-12-13 11:03:06.167239] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:46.332 [2024-12-13 11:03:06.167348] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:46.332 [2024-12-13 11:03:06.167357] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:46.332 [2024-12-13 11:03:06.167362] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:46.332 [2024-12-13 11:03:06.167395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.332 [2024-12-13 11:03:06.167492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:46.332 [2024-12-13 11:03:06.167561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:46.332 [2024-12-13 11:03:06.167563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.332 11:03:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:46.332 11:03:06 -- common/autotest_common.sh@862 -- # return 0 00:08:46.332 11:03:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:46.332 11:03:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:46.332 11:03:06 -- common/autotest_common.sh@10 -- # set +x 00:08:46.332 11:03:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.332 11:03:06 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:46.332 11:03:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.332 11:03:06 -- common/autotest_common.sh@10 -- # set +x 00:08:46.591 [2024-12-13 11:03:06.904629] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ad8960/0x1adce50) succeed. 00:08:46.591 [2024-12-13 11:03:06.912752] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ad9f50/0x1b1e4f0) succeed. 
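The trace above is the standard referrals-test bring-up: nvmf_tgt is launched with all tracepoint groups enabled (-e 0xFFFF) on a four-core mask, the harness waits for the RPC socket at /var/tmp/spdk.sock, and an RDMA transport is created on the IB devices it finds. A minimal sketch of the same sequence, assuming a local SPDK build tree and driving the RPC socket with scripts/rpc.py instead of the test suite's rpc_cmd wrapper:

  # start the target and give it a moment to open /var/tmp/spdk.sock
  ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  sleep 1
  # create the RDMA transport with the options shown in the trace
  ./spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
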
00:08:46.591 11:03:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.591 11:03:07 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:08:46.591 11:03:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.591 11:03:07 -- common/autotest_common.sh@10 -- # set +x 00:08:46.591 [2024-12-13 11:03:07.027129] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:08:46.591 11:03:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.591 11:03:07 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:08:46.591 11:03:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.591 11:03:07 -- common/autotest_common.sh@10 -- # set +x 00:08:46.592 11:03:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.592 11:03:07 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:08:46.592 11:03:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.592 11:03:07 -- common/autotest_common.sh@10 -- # set +x 00:08:46.592 11:03:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.592 11:03:07 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:08:46.592 11:03:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.592 11:03:07 -- common/autotest_common.sh@10 -- # set +x 00:08:46.592 11:03:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.592 11:03:07 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:46.592 11:03:07 -- target/referrals.sh@48 -- # jq length 00:08:46.592 11:03:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.592 11:03:07 -- common/autotest_common.sh@10 -- # set +x 00:08:46.592 11:03:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.592 11:03:07 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:46.592 11:03:07 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:46.592 11:03:07 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:46.592 11:03:07 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:46.592 11:03:07 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:46.592 11:03:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.592 11:03:07 -- common/autotest_common.sh@10 -- # set +x 00:08:46.592 11:03:07 -- target/referrals.sh@21 -- # sort 00:08:46.592 11:03:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.592 11:03:07 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:46.592 11:03:07 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:46.592 11:03:07 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:46.592 11:03:07 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:46.592 11:03:07 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:46.592 11:03:07 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:46.592 11:03:07 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:46.592 11:03:07 -- target/referrals.sh@26 -- # sort 00:08:46.851 11:03:07 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 
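At this point the discovery listener is up on 192.168.100.8:8009 and three referrals (127.0.0.2, 127.0.0.3, 127.0.0.4 on port 4430) have been registered. The get_referral_ips helper then checks that the RPC view and the host view agree; the essence of that check, reusing the jq filters visible in the trace (the --hostnqn/--hostid options are omitted here for brevity):

  # referral addresses according to the target
  ./spdk/scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
  # referral addresses according to a host querying the discovery service
  nvme discover -t rdma -a 192.168.100.8 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
  # the test passes when both listings read: 127.0.0.2 127.0.0.3 127.0.0.4
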
00:08:46.851 11:03:07 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:46.851 11:03:07 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:08:46.851 11:03:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.851 11:03:07 -- common/autotest_common.sh@10 -- # set +x 00:08:46.851 11:03:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.851 11:03:07 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:08:46.851 11:03:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.851 11:03:07 -- common/autotest_common.sh@10 -- # set +x 00:08:46.851 11:03:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.851 11:03:07 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:08:46.851 11:03:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.851 11:03:07 -- common/autotest_common.sh@10 -- # set +x 00:08:46.851 11:03:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.851 11:03:07 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:46.851 11:03:07 -- target/referrals.sh@56 -- # jq length 00:08:46.851 11:03:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.851 11:03:07 -- common/autotest_common.sh@10 -- # set +x 00:08:46.851 11:03:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.851 11:03:07 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:46.851 11:03:07 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:46.851 11:03:07 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:46.851 11:03:07 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:46.851 11:03:07 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:46.851 11:03:07 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:46.851 11:03:07 -- target/referrals.sh@26 -- # sort 00:08:47.110 11:03:07 -- target/referrals.sh@26 -- # echo 00:08:47.110 11:03:07 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:47.110 11:03:07 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:08:47.110 11:03:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.110 11:03:07 -- common/autotest_common.sh@10 -- # set +x 00:08:47.110 11:03:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.110 11:03:07 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:47.110 11:03:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.110 11:03:07 -- common/autotest_common.sh@10 -- # set +x 00:08:47.110 11:03:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.110 11:03:07 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:47.110 11:03:07 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:47.110 11:03:07 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:47.110 11:03:07 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:47.110 11:03:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.110 11:03:07 -- target/referrals.sh@21 -- # sort 00:08:47.110 11:03:07 -- 
common/autotest_common.sh@10 -- # set +x 00:08:47.110 11:03:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.110 11:03:07 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:47.110 11:03:07 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:47.110 11:03:07 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:47.110 11:03:07 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:47.110 11:03:07 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:47.110 11:03:07 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:47.110 11:03:07 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:47.110 11:03:07 -- target/referrals.sh@26 -- # sort 00:08:47.110 11:03:07 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:47.110 11:03:07 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:47.110 11:03:07 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:47.110 11:03:07 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:47.110 11:03:07 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:47.110 11:03:07 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:47.110 11:03:07 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:47.110 11:03:07 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:47.369 11:03:07 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:47.369 11:03:07 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:47.369 11:03:07 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:47.369 11:03:07 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:47.369 11:03:07 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:47.369 11:03:07 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:47.369 11:03:07 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:47.369 11:03:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.369 11:03:07 -- common/autotest_common.sh@10 -- # set +x 00:08:47.369 11:03:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.369 11:03:07 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:47.369 11:03:07 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:47.369 11:03:07 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:47.369 11:03:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.369 11:03:07 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:47.369 11:03:07 -- common/autotest_common.sh@10 -- # set +x 00:08:47.369 11:03:07 -- target/referrals.sh@21 
-- # sort 00:08:47.369 11:03:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.369 11:03:07 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:47.369 11:03:07 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:47.369 11:03:07 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:47.369 11:03:07 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:47.369 11:03:07 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:47.370 11:03:07 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:47.370 11:03:07 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:47.370 11:03:07 -- target/referrals.sh@26 -- # sort 00:08:47.370 11:03:07 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:47.370 11:03:07 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:47.370 11:03:07 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:47.370 11:03:07 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:47.370 11:03:07 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:47.370 11:03:07 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:47.370 11:03:07 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:47.629 11:03:08 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:47.629 11:03:08 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:47.629 11:03:08 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:47.629 11:03:08 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:47.629 11:03:08 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:47.629 11:03:08 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:47.629 11:03:08 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:47.629 11:03:08 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:47.629 11:03:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.629 11:03:08 -- common/autotest_common.sh@10 -- # set +x 00:08:47.629 11:03:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.629 11:03:08 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:47.629 11:03:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.629 11:03:08 -- common/autotest_common.sh@10 -- # set +x 00:08:47.629 11:03:08 -- target/referrals.sh@82 -- # jq length 00:08:47.629 11:03:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.629 11:03:08 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:47.629 11:03:08 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:47.629 11:03:08 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:47.629 11:03:08 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:47.629 11:03:08 
-- target/referrals.sh@26 -- # sort 00:08:47.629 11:03:08 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:47.629 11:03:08 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:47.888 11:03:08 -- target/referrals.sh@26 -- # echo 00:08:47.888 11:03:08 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:47.888 11:03:08 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:47.888 11:03:08 -- target/referrals.sh@86 -- # nvmftestfini 00:08:47.888 11:03:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:47.888 11:03:08 -- nvmf/common.sh@116 -- # sync 00:08:47.888 11:03:08 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:08:47.888 11:03:08 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:08:47.888 11:03:08 -- nvmf/common.sh@119 -- # set +e 00:08:47.888 11:03:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:47.888 11:03:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:08:47.888 rmmod nvme_rdma 00:08:47.888 rmmod nvme_fabrics 00:08:47.888 11:03:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:47.888 11:03:08 -- nvmf/common.sh@123 -- # set -e 00:08:47.888 11:03:08 -- nvmf/common.sh@124 -- # return 0 00:08:47.888 11:03:08 -- nvmf/common.sh@477 -- # '[' -n 1489885 ']' 00:08:47.888 11:03:08 -- nvmf/common.sh@478 -- # killprocess 1489885 00:08:47.888 11:03:08 -- common/autotest_common.sh@936 -- # '[' -z 1489885 ']' 00:08:47.888 11:03:08 -- common/autotest_common.sh@940 -- # kill -0 1489885 00:08:47.888 11:03:08 -- common/autotest_common.sh@941 -- # uname 00:08:47.888 11:03:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:47.888 11:03:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1489885 00:08:47.888 11:03:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:47.888 11:03:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:47.888 11:03:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1489885' 00:08:47.888 killing process with pid 1489885 00:08:47.888 11:03:08 -- common/autotest_common.sh@955 -- # kill 1489885 00:08:47.888 11:03:08 -- common/autotest_common.sh@960 -- # wait 1489885 00:08:48.148 11:03:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:48.148 11:03:08 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:08:48.148 00:08:48.148 real 0m8.666s 00:08:48.148 user 0m12.237s 00:08:48.148 sys 0m5.199s 00:08:48.148 11:03:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:48.148 11:03:08 -- common/autotest_common.sh@10 -- # set +x 00:08:48.148 ************************************ 00:08:48.148 END TEST nvmf_referrals 00:08:48.148 ************************************ 00:08:48.148 11:03:08 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:08:48.148 11:03:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:48.148 11:03:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:48.148 11:03:08 -- common/autotest_common.sh@10 -- # set +x 00:08:48.148 ************************************ 00:08:48.148 START TEST nvmf_connect_disconnect 00:08:48.148 ************************************ 00:08:48.148 11:03:08 -- common/autotest_common.sh@1114 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:08:48.408 * Looking for test storage... 00:08:48.408 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:48.408 11:03:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:48.408 11:03:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:48.408 11:03:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:48.408 11:03:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:48.408 11:03:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:48.408 11:03:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:48.408 11:03:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:48.408 11:03:08 -- scripts/common.sh@335 -- # IFS=.-: 00:08:48.408 11:03:08 -- scripts/common.sh@335 -- # read -ra ver1 00:08:48.408 11:03:08 -- scripts/common.sh@336 -- # IFS=.-: 00:08:48.408 11:03:08 -- scripts/common.sh@336 -- # read -ra ver2 00:08:48.408 11:03:08 -- scripts/common.sh@337 -- # local 'op=<' 00:08:48.408 11:03:08 -- scripts/common.sh@339 -- # ver1_l=2 00:08:48.408 11:03:08 -- scripts/common.sh@340 -- # ver2_l=1 00:08:48.408 11:03:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:48.408 11:03:08 -- scripts/common.sh@343 -- # case "$op" in 00:08:48.408 11:03:08 -- scripts/common.sh@344 -- # : 1 00:08:48.408 11:03:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:48.408 11:03:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:48.408 11:03:08 -- scripts/common.sh@364 -- # decimal 1 00:08:48.408 11:03:08 -- scripts/common.sh@352 -- # local d=1 00:08:48.408 11:03:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:48.408 11:03:08 -- scripts/common.sh@354 -- # echo 1 00:08:48.408 11:03:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:48.408 11:03:08 -- scripts/common.sh@365 -- # decimal 2 00:08:48.408 11:03:08 -- scripts/common.sh@352 -- # local d=2 00:08:48.408 11:03:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:48.408 11:03:08 -- scripts/common.sh@354 -- # echo 2 00:08:48.408 11:03:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:48.408 11:03:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:48.408 11:03:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:48.408 11:03:08 -- scripts/common.sh@367 -- # return 0 00:08:48.408 11:03:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:48.408 11:03:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:48.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.408 --rc genhtml_branch_coverage=1 00:08:48.408 --rc genhtml_function_coverage=1 00:08:48.408 --rc genhtml_legend=1 00:08:48.408 --rc geninfo_all_blocks=1 00:08:48.408 --rc geninfo_unexecuted_blocks=1 00:08:48.408 00:08:48.408 ' 00:08:48.408 11:03:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:48.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.408 --rc genhtml_branch_coverage=1 00:08:48.408 --rc genhtml_function_coverage=1 00:08:48.408 --rc genhtml_legend=1 00:08:48.408 --rc geninfo_all_blocks=1 00:08:48.408 --rc geninfo_unexecuted_blocks=1 00:08:48.408 00:08:48.408 ' 00:08:48.408 11:03:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:48.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.408 --rc genhtml_branch_coverage=1 00:08:48.408 --rc genhtml_function_coverage=1 
00:08:48.408 --rc genhtml_legend=1 00:08:48.408 --rc geninfo_all_blocks=1 00:08:48.408 --rc geninfo_unexecuted_blocks=1 00:08:48.408 00:08:48.408 ' 00:08:48.408 11:03:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:48.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.408 --rc genhtml_branch_coverage=1 00:08:48.408 --rc genhtml_function_coverage=1 00:08:48.408 --rc genhtml_legend=1 00:08:48.408 --rc geninfo_all_blocks=1 00:08:48.408 --rc geninfo_unexecuted_blocks=1 00:08:48.408 00:08:48.408 ' 00:08:48.408 11:03:08 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:48.408 11:03:08 -- nvmf/common.sh@7 -- # uname -s 00:08:48.408 11:03:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:48.408 11:03:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:48.408 11:03:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:48.408 11:03:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:48.408 11:03:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:48.408 11:03:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:48.408 11:03:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:48.408 11:03:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:48.408 11:03:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:48.408 11:03:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:48.408 11:03:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:08:48.408 11:03:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:08:48.408 11:03:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:48.408 11:03:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:48.408 11:03:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:48.408 11:03:08 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:48.408 11:03:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.408 11:03:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.408 11:03:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.408 11:03:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.408 11:03:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.409 11:03:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.409 11:03:08 -- paths/export.sh@5 -- # export PATH 00:08:48.409 11:03:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.409 11:03:08 -- nvmf/common.sh@46 -- # : 0 00:08:48.409 11:03:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:48.409 11:03:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:48.409 11:03:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:48.409 11:03:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:48.409 11:03:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:48.409 11:03:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:48.409 11:03:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:48.409 11:03:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:48.409 11:03:08 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:48.409 11:03:08 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:48.409 11:03:08 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:48.409 11:03:08 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:48.409 11:03:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:48.409 11:03:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:48.409 11:03:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:48.409 11:03:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:48.409 11:03:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.409 11:03:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:48.409 11:03:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.409 11:03:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:48.409 11:03:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:48.409 11:03:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:48.409 11:03:08 -- common/autotest_common.sh@10 -- # set +x 00:08:54.979 11:03:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:54.979 11:03:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:54.979 11:03:14 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:54.979 11:03:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:54.979 11:03:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:54.979 11:03:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:54.979 11:03:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:54.979 11:03:14 -- nvmf/common.sh@294 -- # net_devs=() 00:08:54.979 11:03:14 -- nvmf/common.sh@294 -- # local -ga net_devs 
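The gather_supported_nvmf_pci_devs step that begins here sorts PCI functions into e810, x722 and mlx buckets by vendor/device ID (Intel 0x8086, Mellanox 0x15b3) before keeping only the mlx5 devices requested by SPDK_TEST_NVMF_NICS. A rough stand-in for that pci_bus_cache lookup, reading the IDs straight from sysfs (a simplified sketch covering the IDs seen in this run, not the actual common.sh implementation):

  for dev in /sys/bus/pci/devices/*; do
    vendor=$(cat "$dev/vendor"); device=$(cat "$dev/device")
    case "$vendor:$device" in
      0x15b3:0x1015|0x15b3:0x1017|0x15b3:0x1019) echo "mlx:  ${dev##*/}" ;;
      0x8086:0x1592|0x8086:0x159b)               echo "e810: ${dev##*/}" ;;
      0x8086:0x37d2)                             echo "x722: ${dev##*/}" ;;
    esac
  done
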
00:08:54.979 11:03:14 -- nvmf/common.sh@295 -- # e810=() 00:08:54.979 11:03:14 -- nvmf/common.sh@295 -- # local -ga e810 00:08:54.979 11:03:14 -- nvmf/common.sh@296 -- # x722=() 00:08:54.979 11:03:14 -- nvmf/common.sh@296 -- # local -ga x722 00:08:54.979 11:03:14 -- nvmf/common.sh@297 -- # mlx=() 00:08:54.979 11:03:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:54.979 11:03:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:54.979 11:03:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:54.979 11:03:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:54.979 11:03:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:54.979 11:03:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:54.979 11:03:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:54.979 11:03:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:54.979 11:03:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:54.979 11:03:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:54.979 11:03:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:54.979 11:03:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:54.979 11:03:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:54.979 11:03:14 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:54.979 11:03:14 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:54.979 11:03:14 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:54.979 11:03:14 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:54.979 11:03:14 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:54.979 11:03:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:54.979 11:03:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:54.979 11:03:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:54.979 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:54.979 11:03:14 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:54.979 11:03:14 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:54.979 11:03:14 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:54.979 11:03:14 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:54.979 11:03:14 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:54.979 11:03:14 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:54.979 11:03:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:54.979 11:03:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:54.979 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:54.979 11:03:14 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:54.979 11:03:14 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:54.979 11:03:14 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:54.979 11:03:14 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:54.979 11:03:14 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:54.979 11:03:14 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:54.979 11:03:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:54.979 11:03:14 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:54.979 11:03:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:54.979 11:03:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.979 11:03:14 
-- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:54.979 11:03:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.979 11:03:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:54.979 Found net devices under 0000:18:00.0: mlx_0_0 00:08:54.979 11:03:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.979 11:03:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:54.979 11:03:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.979 11:03:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:54.979 11:03:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.979 11:03:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:54.979 Found net devices under 0000:18:00.1: mlx_0_1 00:08:54.979 11:03:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.979 11:03:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:54.979 11:03:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:54.979 11:03:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:54.979 11:03:14 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:54.979 11:03:14 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:54.979 11:03:14 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:54.979 11:03:14 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:54.979 11:03:14 -- nvmf/common.sh@57 -- # uname 00:08:54.979 11:03:14 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:54.979 11:03:14 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:54.979 11:03:14 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:54.979 11:03:14 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:54.979 11:03:14 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:54.979 11:03:14 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:54.979 11:03:14 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:54.979 11:03:14 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:54.979 11:03:14 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:54.979 11:03:14 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:54.979 11:03:14 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:54.979 11:03:14 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:54.979 11:03:14 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:54.979 11:03:14 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:54.979 11:03:14 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:54.979 11:03:14 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:54.979 11:03:14 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:54.979 11:03:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:54.979 11:03:14 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:54.979 11:03:14 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:54.979 11:03:14 -- nvmf/common.sh@104 -- # continue 2 00:08:54.979 11:03:14 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:54.979 11:03:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:54.979 11:03:14 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:54.979 11:03:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:54.979 11:03:14 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:54.979 11:03:14 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:54.979 11:03:14 -- nvmf/common.sh@104 -- # continue 2 00:08:54.979 11:03:14 -- 
nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:54.979 11:03:14 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:54.979 11:03:14 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:54.979 11:03:14 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:54.979 11:03:14 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:54.979 11:03:14 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:54.979 11:03:14 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:54.979 11:03:14 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:54.979 11:03:14 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:54.979 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:54.979 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:08:54.979 altname enp24s0f0np0 00:08:54.979 altname ens785f0np0 00:08:54.979 inet 192.168.100.8/24 scope global mlx_0_0 00:08:54.979 valid_lft forever preferred_lft forever 00:08:54.979 11:03:14 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:54.979 11:03:14 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:54.979 11:03:14 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:54.979 11:03:14 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:54.979 11:03:14 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:54.979 11:03:14 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:54.979 11:03:14 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:54.979 11:03:14 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:54.979 11:03:14 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:54.979 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:54.979 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:08:54.979 altname enp24s0f1np1 00:08:54.979 altname ens785f1np1 00:08:54.979 inet 192.168.100.9/24 scope global mlx_0_1 00:08:54.979 valid_lft forever preferred_lft forever 00:08:54.979 11:03:14 -- nvmf/common.sh@410 -- # return 0 00:08:54.979 11:03:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:54.979 11:03:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:54.979 11:03:14 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:54.979 11:03:14 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:54.979 11:03:14 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:54.979 11:03:14 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:54.979 11:03:14 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:54.979 11:03:14 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:54.979 11:03:14 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:54.979 11:03:14 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:54.979 11:03:14 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:54.979 11:03:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:54.979 11:03:14 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:54.979 11:03:14 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:54.979 11:03:14 -- nvmf/common.sh@104 -- # continue 2 00:08:54.979 11:03:14 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:54.980 11:03:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:54.980 11:03:14 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:54.980 11:03:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:54.980 11:03:14 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:54.980 11:03:14 -- nvmf/common.sh@103 -- # echo mlx_0_1 
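allocate_nic_ips walks the RDMA interface list and reads back each interface's address (assigning one from 192.168.100.0/24 only if it is missing). The get_ip_address calls in the trace reduce to a one-liner that is handy when debugging the test bed by hand:

  # first IPv4 address of a netdev, exactly as extracted in the trace above
  get_ip_address() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
  get_ip_address mlx_0_0   # 192.168.100.8 in this run
  get_ip_address mlx_0_1   # 192.168.100.9
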
00:08:54.980 11:03:14 -- nvmf/common.sh@104 -- # continue 2 00:08:54.980 11:03:14 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:54.980 11:03:14 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:54.980 11:03:14 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:54.980 11:03:14 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:54.980 11:03:14 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:54.980 11:03:14 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:54.980 11:03:14 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:54.980 11:03:14 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:54.980 11:03:14 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:54.980 11:03:14 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:54.980 11:03:14 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:54.980 11:03:14 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:54.980 11:03:14 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:54.980 192.168.100.9' 00:08:54.980 11:03:14 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:54.980 192.168.100.9' 00:08:54.980 11:03:14 -- nvmf/common.sh@445 -- # head -n 1 00:08:54.980 11:03:14 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:54.980 11:03:14 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:54.980 192.168.100.9' 00:08:54.980 11:03:14 -- nvmf/common.sh@446 -- # tail -n +2 00:08:54.980 11:03:14 -- nvmf/common.sh@446 -- # head -n 1 00:08:54.980 11:03:14 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:54.980 11:03:14 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:54.980 11:03:14 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:54.980 11:03:14 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:54.980 11:03:14 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:54.980 11:03:14 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:54.980 11:03:14 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:54.980 11:03:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:54.980 11:03:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:54.980 11:03:14 -- common/autotest_common.sh@10 -- # set +x 00:08:54.980 11:03:14 -- nvmf/common.sh@469 -- # nvmfpid=1493966 00:08:54.980 11:03:14 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:54.980 11:03:14 -- nvmf/common.sh@470 -- # waitforlisten 1493966 00:08:54.980 11:03:14 -- common/autotest_common.sh@829 -- # '[' -z 1493966 ']' 00:08:54.980 11:03:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.980 11:03:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:54.980 11:03:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.980 11:03:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:54.980 11:03:14 -- common/autotest_common.sh@10 -- # set +x 00:08:54.980 [2024-12-13 11:03:14.866420] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:54.980 [2024-12-13 11:03:14.866463] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.980 EAL: No free 2048 kB hugepages reported on node 1 00:08:54.980 [2024-12-13 11:03:14.918103] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:54.980 [2024-12-13 11:03:14.993540] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:54.980 [2024-12-13 11:03:14.993644] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.980 [2024-12-13 11:03:14.993652] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:54.980 [2024-12-13 11:03:14.993658] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:54.980 [2024-12-13 11:03:14.993697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.980 [2024-12-13 11:03:14.993716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:54.980 [2024-12-13 11:03:14.993734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:54.980 [2024-12-13 11:03:14.993735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.239 11:03:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:55.239 11:03:15 -- common/autotest_common.sh@862 -- # return 0 00:08:55.239 11:03:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:55.239 11:03:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:55.239 11:03:15 -- common/autotest_common.sh@10 -- # set +x 00:08:55.239 11:03:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:55.239 11:03:15 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:08:55.239 11:03:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.239 11:03:15 -- common/autotest_common.sh@10 -- # set +x 00:08:55.239 [2024-12-13 11:03:15.698517] rdma.c:2780:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:08:55.239 [2024-12-13 11:03:15.716489] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x85f960/0x863e50) succeed. 00:08:55.239 [2024-12-13 11:03:15.724650] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x860f50/0x8a54f0) succeed. 
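With the target up, connect_disconnect.sh builds the device under test (a 64 MB, 512-byte-block malloc bdev exported as nqn.2016-06.io.spdk:cnode1 on 192.168.100.8:4420) and then loops 100 times, connecting and disconnecting the host. The long run of "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines that follows is the output of those iterations; in simplified form the loop amounts to the sketch below (the real logic lives in test/nvmf/target/connect_disconnect.sh):

  # -i 8 matches the NVME_CONNECT override shown in the trace
  for i in $(seq 1 100); do
    nvme connect -i 8 -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  done
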
00:08:55.239 11:03:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.498 11:03:15 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:55.498 11:03:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.498 11:03:15 -- common/autotest_common.sh@10 -- # set +x 00:08:55.498 11:03:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.498 11:03:15 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:55.498 11:03:15 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:55.498 11:03:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.498 11:03:15 -- common/autotest_common.sh@10 -- # set +x 00:08:55.498 11:03:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.498 11:03:15 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:55.498 11:03:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.498 11:03:15 -- common/autotest_common.sh@10 -- # set +x 00:08:55.498 11:03:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.498 11:03:15 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:55.498 11:03:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.498 11:03:15 -- common/autotest_common.sh@10 -- # set +x 00:08:55.498 [2024-12-13 11:03:15.853646] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:55.498 11:03:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.498 11:03:15 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:55.498 11:03:15 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:55.498 11:03:15 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:55.498 11:03:15 -- target/connect_disconnect.sh@34 -- # set +x 00:08:58.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:02.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.654 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.187 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.474 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.335 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.870 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.021 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.705 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.238 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.776 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.064 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.618 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.440 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.726 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.300 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.834 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.139 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.431 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.722 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.230 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.041 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.335 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.071 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.901 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.190 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.481 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.892 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.181 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.363 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.714 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:12:37.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.108 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.681 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.215 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.606 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.122 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.503 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.447 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.024 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.312 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.135 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.427 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.715 11:08:28 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:08.715 11:08:28 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:08.715 11:08:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:08.715 11:08:28 -- nvmf/common.sh@116 -- # sync 00:14:08.715 11:08:28 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:14:08.715 11:08:28 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:14:08.715 11:08:28 -- nvmf/common.sh@119 -- # set +e 00:14:08.715 11:08:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:08.715 11:08:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:14:08.715 rmmod nvme_rdma 00:14:08.715 rmmod nvme_fabrics 00:14:08.715 11:08:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:08.715 11:08:28 -- nvmf/common.sh@123 -- # set -e 00:14:08.715 11:08:28 -- nvmf/common.sh@124 -- # return 0 00:14:08.715 11:08:28 -- nvmf/common.sh@477 -- # '[' -n 1493966 ']' 00:14:08.715 11:08:28 -- nvmf/common.sh@478 -- # killprocess 1493966 00:14:08.715 11:08:28 -- common/autotest_common.sh@936 -- # '[' -z 1493966 ']' 00:14:08.715 11:08:28 -- common/autotest_common.sh@940 -- # kill -0 1493966 00:14:08.715 11:08:28 -- common/autotest_common.sh@941 -- # uname 00:14:08.715 11:08:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:08.715 11:08:28 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1493966 00:14:08.715 11:08:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:08.715 11:08:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:08.715 11:08:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1493966' 00:14:08.715 killing process with pid 1493966 00:14:08.715 11:08:28 -- common/autotest_common.sh@955 -- # kill 1493966 00:14:08.715 11:08:28 -- common/autotest_common.sh@960 -- # wait 1493966 00:14:08.715 11:08:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:08.715 11:08:29 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:14:08.715 00:14:08.715 real 5m20.465s 00:14:08.715 user 20m52.829s 00:14:08.715 sys 0m15.366s 00:14:08.715 11:08:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:08.715 11:08:29 -- common/autotest_common.sh@10 -- # set +x 00:14:08.716 ************************************ 00:14:08.716 END TEST nvmf_connect_disconnect 00:14:08.716 ************************************ 00:14:08.716 11:08:29 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:14:08.716 11:08:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:08.716 11:08:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:08.716 11:08:29 -- common/autotest_common.sh@10 -- # set +x 00:14:08.716 ************************************ 00:14:08.716 START TEST nvmf_multitarget 00:14:08.716 ************************************ 00:14:08.716 11:08:29 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:14:08.716 * Looking for test storage... 00:14:08.716 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:08.716 11:08:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:08.716 11:08:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:08.716 11:08:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:08.976 11:08:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:08.976 11:08:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:08.976 11:08:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:08.976 11:08:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:08.976 11:08:29 -- scripts/common.sh@335 -- # IFS=.-: 00:14:08.976 11:08:29 -- scripts/common.sh@335 -- # read -ra ver1 00:14:08.976 11:08:29 -- scripts/common.sh@336 -- # IFS=.-: 00:14:08.976 11:08:29 -- scripts/common.sh@336 -- # read -ra ver2 00:14:08.976 11:08:29 -- scripts/common.sh@337 -- # local 'op=<' 00:14:08.976 11:08:29 -- scripts/common.sh@339 -- # ver1_l=2 00:14:08.976 11:08:29 -- scripts/common.sh@340 -- # ver2_l=1 00:14:08.976 11:08:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:08.976 11:08:29 -- scripts/common.sh@343 -- # case "$op" in 00:14:08.976 11:08:29 -- scripts/common.sh@344 -- # : 1 00:14:08.976 11:08:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:08.976 11:08:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:08.976 11:08:29 -- scripts/common.sh@364 -- # decimal 1 00:14:08.976 11:08:29 -- scripts/common.sh@352 -- # local d=1 00:14:08.976 11:08:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:08.976 11:08:29 -- scripts/common.sh@354 -- # echo 1 00:14:08.976 11:08:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:08.976 11:08:29 -- scripts/common.sh@365 -- # decimal 2 00:14:08.976 11:08:29 -- scripts/common.sh@352 -- # local d=2 00:14:08.976 11:08:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:08.976 11:08:29 -- scripts/common.sh@354 -- # echo 2 00:14:08.976 11:08:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:08.976 11:08:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:08.976 11:08:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:08.976 11:08:29 -- scripts/common.sh@367 -- # return 0 00:14:08.976 11:08:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:08.976 11:08:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:08.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.976 --rc genhtml_branch_coverage=1 00:14:08.976 --rc genhtml_function_coverage=1 00:14:08.976 --rc genhtml_legend=1 00:14:08.976 --rc geninfo_all_blocks=1 00:14:08.976 --rc geninfo_unexecuted_blocks=1 00:14:08.976 00:14:08.976 ' 00:14:08.976 11:08:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:08.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.976 --rc genhtml_branch_coverage=1 00:14:08.976 --rc genhtml_function_coverage=1 00:14:08.976 --rc genhtml_legend=1 00:14:08.976 --rc geninfo_all_blocks=1 00:14:08.976 --rc geninfo_unexecuted_blocks=1 00:14:08.976 00:14:08.976 ' 00:14:08.976 11:08:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:08.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.976 --rc genhtml_branch_coverage=1 00:14:08.976 --rc genhtml_function_coverage=1 00:14:08.976 --rc genhtml_legend=1 00:14:08.976 --rc geninfo_all_blocks=1 00:14:08.976 --rc geninfo_unexecuted_blocks=1 00:14:08.976 00:14:08.976 ' 00:14:08.976 11:08:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:08.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.976 --rc genhtml_branch_coverage=1 00:14:08.976 --rc genhtml_function_coverage=1 00:14:08.976 --rc genhtml_legend=1 00:14:08.976 --rc geninfo_all_blocks=1 00:14:08.976 --rc geninfo_unexecuted_blocks=1 00:14:08.976 00:14:08.976 ' 00:14:08.976 11:08:29 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:08.976 11:08:29 -- nvmf/common.sh@7 -- # uname -s 00:14:08.976 11:08:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:08.976 11:08:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:08.976 11:08:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:08.976 11:08:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:08.976 11:08:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:08.976 11:08:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:08.976 11:08:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:08.976 11:08:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:08.976 11:08:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:08.976 11:08:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:08.976 11:08:29 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:08.976 11:08:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:14:08.976 11:08:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:08.976 11:08:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:08.976 11:08:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:08.976 11:08:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:08.976 11:08:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:08.976 11:08:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:08.976 11:08:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:08.976 11:08:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.976 11:08:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.976 11:08:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.976 11:08:29 -- paths/export.sh@5 -- # export PATH 00:14:08.976 11:08:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.976 11:08:29 -- nvmf/common.sh@46 -- # : 0 00:14:08.976 11:08:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:08.976 11:08:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:08.976 11:08:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:08.976 11:08:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:08.976 11:08:29 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:08.976 11:08:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:08.976 11:08:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:08.976 11:08:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:08.976 11:08:29 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:08.976 11:08:29 -- target/multitarget.sh@15 -- # nvmftestinit 00:14:08.976 11:08:29 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:14:08.976 11:08:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:08.976 11:08:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:08.976 11:08:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:08.976 11:08:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:08.976 11:08:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.976 11:08:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:08.976 11:08:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.976 11:08:29 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:08.976 11:08:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:08.976 11:08:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:08.976 11:08:29 -- common/autotest_common.sh@10 -- # set +x 00:14:14.251 11:08:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:14.251 11:08:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:14.251 11:08:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:14.251 11:08:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:14.251 11:08:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:14.251 11:08:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:14.251 11:08:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:14.251 11:08:34 -- nvmf/common.sh@294 -- # net_devs=() 00:14:14.251 11:08:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:14.251 11:08:34 -- nvmf/common.sh@295 -- # e810=() 00:14:14.251 11:08:34 -- nvmf/common.sh@295 -- # local -ga e810 00:14:14.251 11:08:34 -- nvmf/common.sh@296 -- # x722=() 00:14:14.251 11:08:34 -- nvmf/common.sh@296 -- # local -ga x722 00:14:14.251 11:08:34 -- nvmf/common.sh@297 -- # mlx=() 00:14:14.251 11:08:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:14.251 11:08:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:14.251 11:08:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:14.251 11:08:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:14.251 11:08:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:14.251 11:08:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:14.251 11:08:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:14.251 11:08:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:14.251 11:08:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:14.251 11:08:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:14.251 11:08:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:14.251 11:08:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:14.251 11:08:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:14.251 11:08:34 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:14:14.251 11:08:34 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 
00:14:14.251 11:08:34 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:14:14.251 11:08:34 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:14:14.251 11:08:34 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:14:14.251 11:08:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:14.251 11:08:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:14.251 11:08:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:14:14.251 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:14:14.251 11:08:34 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:14.251 11:08:34 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:14.251 11:08:34 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:14.251 11:08:34 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:14.251 11:08:34 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:14.251 11:08:34 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:14.251 11:08:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:14.251 11:08:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:14:14.251 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:14:14.251 11:08:34 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:14.251 11:08:34 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:14.251 11:08:34 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:14.251 11:08:34 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:14.251 11:08:34 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:14.251 11:08:34 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:14.251 11:08:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:14.251 11:08:34 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:14:14.251 11:08:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:14.251 11:08:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.251 11:08:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:14.251 11:08:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.251 11:08:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:14:14.251 Found net devices under 0000:18:00.0: mlx_0_0 00:14:14.251 11:08:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.251 11:08:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:14.251 11:08:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.251 11:08:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:14.251 11:08:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.251 11:08:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:14:14.251 Found net devices under 0000:18:00.1: mlx_0_1 00:14:14.251 11:08:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.251 11:08:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:14.251 11:08:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:14.251 11:08:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:14.251 11:08:34 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:14:14.251 11:08:34 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:14:14.251 11:08:34 -- nvmf/common.sh@408 -- # rdma_device_init 00:14:14.251 11:08:34 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:14:14.251 11:08:34 -- nvmf/common.sh@57 -- # uname 00:14:14.251 11:08:34 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:14:14.251 11:08:34 -- nvmf/common.sh@61 -- # modprobe ib_cm 
00:14:14.251 11:08:34 -- nvmf/common.sh@62 -- # modprobe ib_core 00:14:14.251 11:08:34 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:14:14.251 11:08:34 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:14:14.251 11:08:34 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:14:14.251 11:08:34 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:14:14.251 11:08:34 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:14:14.251 11:08:34 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:14:14.251 11:08:34 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:14.251 11:08:34 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:14:14.251 11:08:34 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:14.251 11:08:34 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:14.251 11:08:34 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:14.251 11:08:34 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:14.251 11:08:34 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:14.251 11:08:34 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:14.251 11:08:34 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:14.251 11:08:34 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:14.251 11:08:34 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:14.251 11:08:34 -- nvmf/common.sh@104 -- # continue 2 00:14:14.251 11:08:34 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:14.251 11:08:34 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:14.251 11:08:34 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:14.252 11:08:34 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:14.252 11:08:34 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:14.252 11:08:34 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:14.252 11:08:34 -- nvmf/common.sh@104 -- # continue 2 00:14:14.252 11:08:34 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:14.252 11:08:34 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:14:14.252 11:08:34 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:14.252 11:08:34 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:14.252 11:08:34 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:14.252 11:08:34 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:14.252 11:08:34 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:14:14.252 11:08:34 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:14:14.252 11:08:34 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:14:14.252 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:14.252 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:14:14.252 altname enp24s0f0np0 00:14:14.252 altname ens785f0np0 00:14:14.252 inet 192.168.100.8/24 scope global mlx_0_0 00:14:14.252 valid_lft forever preferred_lft forever 00:14:14.252 11:08:34 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:14.252 11:08:34 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:14:14.252 11:08:34 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:14.252 11:08:34 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:14.252 11:08:34 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:14.252 11:08:34 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:14.252 11:08:34 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:14:14.252 11:08:34 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:14:14.252 11:08:34 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:14:14.252 3: mlx_0_1: mtu 1500 qdisc mq 
state DOWN group default qlen 1000 00:14:14.252 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:14:14.252 altname enp24s0f1np1 00:14:14.252 altname ens785f1np1 00:14:14.252 inet 192.168.100.9/24 scope global mlx_0_1 00:14:14.252 valid_lft forever preferred_lft forever 00:14:14.252 11:08:34 -- nvmf/common.sh@410 -- # return 0 00:14:14.252 11:08:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:14.252 11:08:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:14.252 11:08:34 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:14:14.252 11:08:34 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:14:14.252 11:08:34 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:14:14.252 11:08:34 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:14.252 11:08:34 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:14.252 11:08:34 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:14.252 11:08:34 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:14.252 11:08:34 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:14.252 11:08:34 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:14.252 11:08:34 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:14.252 11:08:34 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:14.252 11:08:34 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:14.252 11:08:34 -- nvmf/common.sh@104 -- # continue 2 00:14:14.252 11:08:34 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:14.252 11:08:34 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:14.252 11:08:34 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:14.252 11:08:34 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:14.252 11:08:34 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:14.252 11:08:34 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:14.252 11:08:34 -- nvmf/common.sh@104 -- # continue 2 00:14:14.252 11:08:34 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:14.252 11:08:34 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:14:14.252 11:08:34 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:14.252 11:08:34 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:14.252 11:08:34 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:14.252 11:08:34 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:14.252 11:08:34 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:14.252 11:08:34 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:14:14.252 11:08:34 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:14.252 11:08:34 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:14.252 11:08:34 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:14.252 11:08:34 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:14.252 11:08:34 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:14:14.252 192.168.100.9' 00:14:14.512 11:08:34 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:14:14.512 192.168.100.9' 00:14:14.512 11:08:34 -- nvmf/common.sh@445 -- # head -n 1 00:14:14.512 11:08:34 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:14.512 11:08:34 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:14:14.512 192.168.100.9' 00:14:14.512 11:08:34 -- nvmf/common.sh@446 -- # tail -n +2 00:14:14.512 11:08:34 -- nvmf/common.sh@446 -- # head -n 1 00:14:14.512 11:08:34 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:14.512 11:08:34 -- nvmf/common.sh@447 
-- # '[' -z 192.168.100.8 ']' 00:14:14.512 11:08:34 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:14.512 11:08:34 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:14:14.512 11:08:34 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:14:14.512 11:08:34 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:14:14.512 11:08:34 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:14.512 11:08:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:14.512 11:08:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:14.512 11:08:34 -- common/autotest_common.sh@10 -- # set +x 00:14:14.512 11:08:34 -- nvmf/common.sh@469 -- # nvmfpid=1556371 00:14:14.512 11:08:34 -- nvmf/common.sh@470 -- # waitforlisten 1556371 00:14:14.512 11:08:34 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:14.512 11:08:34 -- common/autotest_common.sh@829 -- # '[' -z 1556371 ']' 00:14:14.512 11:08:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.512 11:08:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:14.512 11:08:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.512 11:08:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:14.512 11:08:34 -- common/autotest_common.sh@10 -- # set +x 00:14:14.512 [2024-12-13 11:08:34.906385] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:14.512 [2024-12-13 11:08:34.906434] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.512 EAL: No free 2048 kB hugepages reported on node 1 00:14:14.512 [2024-12-13 11:08:34.961242] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:14.512 [2024-12-13 11:08:35.033352] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:14.512 [2024-12-13 11:08:35.033454] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:14.512 [2024-12-13 11:08:35.033461] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:14.512 [2024-12-13 11:08:35.033467] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
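Once the target is listening on /var/tmp/spdk.sock, the multitarget test drives per-target management through test/nvmf/target/multitarget_rpc.py (the rpc_py path set earlier in this log) rather than the plain rpc.py client. A condensed sketch of the create/count/delete sequence the log below records, with paths shortened to be relative to the spdk checkout and the expected jq results shown as comments, is:

  # a single default target exists right after startup
  ./test/nvmf/target/multitarget_rpc.py nvmf_get_targets | jq length            # expect 1
  # add two named targets, each limited to 32 subsystems
  ./test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32
  ./test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32
  ./test/nvmf/target/multitarget_rpc.py nvmf_get_targets | jq length            # expect 3
  # tear the extra targets down again and re-check the count
  ./test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1
  ./test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2
  ./test/nvmf/target/multitarget_rpc.py nvmf_get_targets | jq length            # expect 1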
00:14:14.512 [2024-12-13 11:08:35.033501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.512 [2024-12-13 11:08:35.033609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:14.512 [2024-12-13 11:08:35.033694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:14.512 [2024-12-13 11:08:35.033695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.449 11:08:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:15.449 11:08:35 -- common/autotest_common.sh@862 -- # return 0 00:14:15.449 11:08:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:15.449 11:08:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:15.449 11:08:35 -- common/autotest_common.sh@10 -- # set +x 00:14:15.449 11:08:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:15.449 11:08:35 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:15.449 11:08:35 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:15.449 11:08:35 -- target/multitarget.sh@21 -- # jq length 00:14:15.449 11:08:35 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:15.449 11:08:35 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:15.449 "nvmf_tgt_1" 00:14:15.449 11:08:35 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:15.707 "nvmf_tgt_2" 00:14:15.707 11:08:36 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:15.707 11:08:36 -- target/multitarget.sh@28 -- # jq length 00:14:15.707 11:08:36 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:15.707 11:08:36 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:15.707 true 00:14:15.707 11:08:36 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:15.966 true 00:14:15.966 11:08:36 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:15.966 11:08:36 -- target/multitarget.sh@35 -- # jq length 00:14:15.966 11:08:36 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:15.966 11:08:36 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:15.966 11:08:36 -- target/multitarget.sh@41 -- # nvmftestfini 00:14:15.966 11:08:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:15.966 11:08:36 -- nvmf/common.sh@116 -- # sync 00:14:15.966 11:08:36 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:14:15.966 11:08:36 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:14:15.966 11:08:36 -- nvmf/common.sh@119 -- # set +e 00:14:15.966 11:08:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:15.966 11:08:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:14:15.966 rmmod nvme_rdma 00:14:15.966 rmmod nvme_fabrics 00:14:15.966 11:08:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:15.966 11:08:36 -- nvmf/common.sh@123 -- # set -e 00:14:15.966 11:08:36 -- nvmf/common.sh@124 -- # 
return 0 00:14:15.966 11:08:36 -- nvmf/common.sh@477 -- # '[' -n 1556371 ']' 00:14:15.966 11:08:36 -- nvmf/common.sh@478 -- # killprocess 1556371 00:14:15.966 11:08:36 -- common/autotest_common.sh@936 -- # '[' -z 1556371 ']' 00:14:15.966 11:08:36 -- common/autotest_common.sh@940 -- # kill -0 1556371 00:14:15.966 11:08:36 -- common/autotest_common.sh@941 -- # uname 00:14:15.966 11:08:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:15.966 11:08:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1556371 00:14:16.225 11:08:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:16.225 11:08:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:16.225 11:08:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1556371' 00:14:16.225 killing process with pid 1556371 00:14:16.225 11:08:36 -- common/autotest_common.sh@955 -- # kill 1556371 00:14:16.225 11:08:36 -- common/autotest_common.sh@960 -- # wait 1556371 00:14:16.225 11:08:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:16.225 11:08:36 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:14:16.225 00:14:16.225 real 0m7.569s 00:14:16.225 user 0m9.095s 00:14:16.225 sys 0m4.644s 00:14:16.225 11:08:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:16.225 11:08:36 -- common/autotest_common.sh@10 -- # set +x 00:14:16.225 ************************************ 00:14:16.225 END TEST nvmf_multitarget 00:14:16.225 ************************************ 00:14:16.225 11:08:36 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:14:16.225 11:08:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:16.225 11:08:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:16.225 11:08:36 -- common/autotest_common.sh@10 -- # set +x 00:14:16.485 ************************************ 00:14:16.485 START TEST nvmf_rpc 00:14:16.485 ************************************ 00:14:16.485 11:08:36 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:14:16.485 * Looking for test storage... 
00:14:16.485 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:16.485 11:08:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:16.485 11:08:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:16.485 11:08:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:16.485 11:08:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:16.485 11:08:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:16.485 11:08:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:16.485 11:08:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:16.485 11:08:36 -- scripts/common.sh@335 -- # IFS=.-: 00:14:16.485 11:08:36 -- scripts/common.sh@335 -- # read -ra ver1 00:14:16.485 11:08:36 -- scripts/common.sh@336 -- # IFS=.-: 00:14:16.485 11:08:36 -- scripts/common.sh@336 -- # read -ra ver2 00:14:16.485 11:08:36 -- scripts/common.sh@337 -- # local 'op=<' 00:14:16.485 11:08:36 -- scripts/common.sh@339 -- # ver1_l=2 00:14:16.485 11:08:36 -- scripts/common.sh@340 -- # ver2_l=1 00:14:16.485 11:08:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:16.485 11:08:36 -- scripts/common.sh@343 -- # case "$op" in 00:14:16.485 11:08:36 -- scripts/common.sh@344 -- # : 1 00:14:16.485 11:08:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:16.485 11:08:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:16.485 11:08:36 -- scripts/common.sh@364 -- # decimal 1 00:14:16.485 11:08:36 -- scripts/common.sh@352 -- # local d=1 00:14:16.485 11:08:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:16.485 11:08:36 -- scripts/common.sh@354 -- # echo 1 00:14:16.485 11:08:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:16.485 11:08:36 -- scripts/common.sh@365 -- # decimal 2 00:14:16.485 11:08:36 -- scripts/common.sh@352 -- # local d=2 00:14:16.485 11:08:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:16.485 11:08:36 -- scripts/common.sh@354 -- # echo 2 00:14:16.485 11:08:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:16.485 11:08:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:16.485 11:08:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:16.485 11:08:36 -- scripts/common.sh@367 -- # return 0 00:14:16.485 11:08:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:16.485 11:08:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:16.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.485 --rc genhtml_branch_coverage=1 00:14:16.485 --rc genhtml_function_coverage=1 00:14:16.485 --rc genhtml_legend=1 00:14:16.485 --rc geninfo_all_blocks=1 00:14:16.485 --rc geninfo_unexecuted_blocks=1 00:14:16.485 00:14:16.485 ' 00:14:16.485 11:08:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:16.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.485 --rc genhtml_branch_coverage=1 00:14:16.485 --rc genhtml_function_coverage=1 00:14:16.485 --rc genhtml_legend=1 00:14:16.485 --rc geninfo_all_blocks=1 00:14:16.485 --rc geninfo_unexecuted_blocks=1 00:14:16.485 00:14:16.485 ' 00:14:16.485 11:08:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:16.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.485 --rc genhtml_branch_coverage=1 00:14:16.485 --rc genhtml_function_coverage=1 00:14:16.485 --rc genhtml_legend=1 00:14:16.485 --rc geninfo_all_blocks=1 00:14:16.485 --rc geninfo_unexecuted_blocks=1 00:14:16.485 00:14:16.485 ' 
00:14:16.485 11:08:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:16.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.485 --rc genhtml_branch_coverage=1 00:14:16.485 --rc genhtml_function_coverage=1 00:14:16.485 --rc genhtml_legend=1 00:14:16.485 --rc geninfo_all_blocks=1 00:14:16.485 --rc geninfo_unexecuted_blocks=1 00:14:16.485 00:14:16.485 ' 00:14:16.485 11:08:36 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:16.485 11:08:36 -- nvmf/common.sh@7 -- # uname -s 00:14:16.485 11:08:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:16.485 11:08:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:16.485 11:08:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:16.485 11:08:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:16.485 11:08:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:16.485 11:08:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:16.485 11:08:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:16.485 11:08:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:16.485 11:08:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:16.486 11:08:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:16.486 11:08:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:16.486 11:08:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:14:16.486 11:08:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:16.486 11:08:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:16.486 11:08:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:16.486 11:08:36 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:16.486 11:08:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:16.486 11:08:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:16.486 11:08:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:16.486 11:08:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.486 11:08:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.486 11:08:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.486 11:08:36 -- paths/export.sh@5 -- # export PATH 00:14:16.486 11:08:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.486 11:08:36 -- nvmf/common.sh@46 -- # : 0 00:14:16.486 11:08:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:16.486 11:08:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:16.486 11:08:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:16.486 11:08:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:16.486 11:08:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:16.486 11:08:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:16.486 11:08:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:16.486 11:08:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:16.486 11:08:36 -- target/rpc.sh@11 -- # loops=5 00:14:16.486 11:08:36 -- target/rpc.sh@23 -- # nvmftestinit 00:14:16.486 11:08:36 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:14:16.486 11:08:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:16.486 11:08:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:16.486 11:08:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:16.486 11:08:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:16.486 11:08:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.486 11:08:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:16.486 11:08:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.486 11:08:36 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:16.486 11:08:36 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:16.486 11:08:36 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:16.486 11:08:36 -- common/autotest_common.sh@10 -- # set +x 00:14:21.760 11:08:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:21.760 11:08:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:21.760 11:08:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:21.760 11:08:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:21.760 11:08:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:21.760 11:08:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:21.760 11:08:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:21.760 11:08:42 -- nvmf/common.sh@294 -- # net_devs=() 00:14:21.760 11:08:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:21.760 11:08:42 -- nvmf/common.sh@295 -- # e810=() 00:14:21.760 11:08:42 -- nvmf/common.sh@295 -- # local -ga e810 00:14:21.760 
11:08:42 -- nvmf/common.sh@296 -- # x722=() 00:14:21.760 11:08:42 -- nvmf/common.sh@296 -- # local -ga x722 00:14:21.760 11:08:42 -- nvmf/common.sh@297 -- # mlx=() 00:14:21.760 11:08:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:21.760 11:08:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:21.760 11:08:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:21.760 11:08:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:21.760 11:08:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:21.760 11:08:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:21.760 11:08:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:21.760 11:08:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:21.760 11:08:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:21.760 11:08:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:21.760 11:08:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:21.760 11:08:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:21.760 11:08:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:21.760 11:08:42 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:14:21.760 11:08:42 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:14:21.760 11:08:42 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:14:21.760 11:08:42 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:14:21.760 11:08:42 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:14:21.760 11:08:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:21.760 11:08:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:21.760 11:08:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:14:21.760 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:14:21.760 11:08:42 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:21.760 11:08:42 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:21.760 11:08:42 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:21.760 11:08:42 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:21.760 11:08:42 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:21.760 11:08:42 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:21.760 11:08:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:21.760 11:08:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:14:21.760 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:14:21.760 11:08:42 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:21.760 11:08:42 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:21.760 11:08:42 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:21.761 11:08:42 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:21.761 11:08:42 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:21.761 11:08:42 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:21.761 11:08:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:21.761 11:08:42 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:14:21.761 11:08:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:21.761 11:08:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.761 11:08:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:21.761 11:08:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:14:21.761 11:08:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:14:21.761 Found net devices under 0000:18:00.0: mlx_0_0 00:14:21.761 11:08:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.761 11:08:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:21.761 11:08:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.761 11:08:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:21.761 11:08:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.761 11:08:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:14:21.761 Found net devices under 0000:18:00.1: mlx_0_1 00:14:21.761 11:08:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.761 11:08:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:21.761 11:08:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:21.761 11:08:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:21.761 11:08:42 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:14:21.761 11:08:42 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:14:21.761 11:08:42 -- nvmf/common.sh@408 -- # rdma_device_init 00:14:21.761 11:08:42 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:14:21.761 11:08:42 -- nvmf/common.sh@57 -- # uname 00:14:21.761 11:08:42 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:14:21.761 11:08:42 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:14:21.761 11:08:42 -- nvmf/common.sh@62 -- # modprobe ib_core 00:14:21.761 11:08:42 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:14:21.761 11:08:42 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:14:21.761 11:08:42 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:14:21.761 11:08:42 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:14:21.761 11:08:42 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:14:21.761 11:08:42 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:14:21.761 11:08:42 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:21.761 11:08:42 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:14:21.761 11:08:42 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:21.761 11:08:42 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:21.761 11:08:42 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:21.761 11:08:42 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:21.761 11:08:42 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:21.761 11:08:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:21.761 11:08:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:21.761 11:08:42 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:21.761 11:08:42 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:21.761 11:08:42 -- nvmf/common.sh@104 -- # continue 2 00:14:21.761 11:08:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:21.761 11:08:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:21.761 11:08:42 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:21.761 11:08:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:21.761 11:08:42 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:21.761 11:08:42 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:21.761 11:08:42 -- nvmf/common.sh@104 -- # continue 2 00:14:21.761 11:08:42 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:21.761 11:08:42 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 
00:14:21.761 11:08:42 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:21.761 11:08:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:21.761 11:08:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:21.761 11:08:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:21.761 11:08:42 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:14:21.761 11:08:42 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:14:21.761 11:08:42 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:14:21.761 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:21.761 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:14:21.761 altname enp24s0f0np0 00:14:21.761 altname ens785f0np0 00:14:21.761 inet 192.168.100.8/24 scope global mlx_0_0 00:14:21.761 valid_lft forever preferred_lft forever 00:14:21.761 11:08:42 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:21.761 11:08:42 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:14:21.761 11:08:42 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:22.020 11:08:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:22.020 11:08:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:22.020 11:08:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:22.021 11:08:42 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:14:22.021 11:08:42 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:14:22.021 11:08:42 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:14:22.021 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:22.021 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:14:22.021 altname enp24s0f1np1 00:14:22.021 altname ens785f1np1 00:14:22.021 inet 192.168.100.9/24 scope global mlx_0_1 00:14:22.021 valid_lft forever preferred_lft forever 00:14:22.021 11:08:42 -- nvmf/common.sh@410 -- # return 0 00:14:22.021 11:08:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:22.021 11:08:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:22.021 11:08:42 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:14:22.021 11:08:42 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:14:22.021 11:08:42 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:14:22.021 11:08:42 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:22.021 11:08:42 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:22.021 11:08:42 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:22.021 11:08:42 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:22.021 11:08:42 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:22.021 11:08:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:22.021 11:08:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:22.021 11:08:42 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:22.021 11:08:42 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:22.021 11:08:42 -- nvmf/common.sh@104 -- # continue 2 00:14:22.021 11:08:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:22.021 11:08:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:22.021 11:08:42 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:22.021 11:08:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:22.021 11:08:42 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:22.021 11:08:42 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:22.021 11:08:42 -- nvmf/common.sh@104 -- # continue 2 00:14:22.021 11:08:42 -- nvmf/common.sh@85 -- # for nic_name in 
$(get_rdma_if_list) 00:14:22.021 11:08:42 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:14:22.021 11:08:42 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:22.021 11:08:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:22.021 11:08:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:22.021 11:08:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:22.021 11:08:42 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:22.021 11:08:42 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:14:22.021 11:08:42 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:22.021 11:08:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:22.021 11:08:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:22.021 11:08:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:22.021 11:08:42 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:14:22.021 192.168.100.9' 00:14:22.021 11:08:42 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:14:22.021 192.168.100.9' 00:14:22.021 11:08:42 -- nvmf/common.sh@445 -- # head -n 1 00:14:22.021 11:08:42 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:22.021 11:08:42 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:14:22.021 192.168.100.9' 00:14:22.021 11:08:42 -- nvmf/common.sh@446 -- # head -n 1 00:14:22.021 11:08:42 -- nvmf/common.sh@446 -- # tail -n +2 00:14:22.021 11:08:42 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:22.021 11:08:42 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:14:22.021 11:08:42 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:22.021 11:08:42 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:14:22.021 11:08:42 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:14:22.021 11:08:42 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:14:22.021 11:08:42 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:22.021 11:08:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:22.021 11:08:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:22.021 11:08:42 -- common/autotest_common.sh@10 -- # set +x 00:14:22.021 11:08:42 -- nvmf/common.sh@469 -- # nvmfpid=1559940 00:14:22.021 11:08:42 -- nvmf/common.sh@470 -- # waitforlisten 1559940 00:14:22.021 11:08:42 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:22.021 11:08:42 -- common/autotest_common.sh@829 -- # '[' -z 1559940 ']' 00:14:22.021 11:08:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.021 11:08:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:22.021 11:08:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.021 11:08:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:22.021 11:08:42 -- common/autotest_common.sh@10 -- # set +x 00:14:22.021 [2024-12-13 11:08:42.489671] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
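At this point the script has harvested the IPv4 addresses of the two RDMA ports (192.168.100.8 on mlx_0_0, 192.168.100.9 on mlx_0_1), picked first/second target IPs from that list, set the rdma transport options, loaded nvme-rdma, and launched nvmf_tgt with core mask 0xF. The first/second-IP selection is plain head/tail plumbing; a small sketch with the values from this run:

  # RDMA_IP_LIST is newline-separated, exactly as echoed in the trace above.
  RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9
  NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'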
00:14:22.021 [2024-12-13 11:08:42.489720] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:22.021 EAL: No free 2048 kB hugepages reported on node 1 00:14:22.021 [2024-12-13 11:08:42.544454] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:22.280 [2024-12-13 11:08:42.614510] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:22.280 [2024-12-13 11:08:42.614614] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:22.280 [2024-12-13 11:08:42.614621] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:22.280 [2024-12-13 11:08:42.614627] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:22.280 [2024-12-13 11:08:42.614670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:22.280 [2024-12-13 11:08:42.614774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:22.280 [2024-12-13 11:08:42.614840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:22.280 [2024-12-13 11:08:42.614841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.848 11:08:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:22.848 11:08:43 -- common/autotest_common.sh@862 -- # return 0 00:14:22.848 11:08:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:22.848 11:08:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:22.848 11:08:43 -- common/autotest_common.sh@10 -- # set +x 00:14:22.848 11:08:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:22.848 11:08:43 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:22.848 11:08:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.848 11:08:43 -- common/autotest_common.sh@10 -- # set +x 00:14:22.848 11:08:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.848 11:08:43 -- target/rpc.sh@26 -- # stats='{ 00:14:22.848 "tick_rate": 2700000000, 00:14:22.848 "poll_groups": [ 00:14:22.848 { 00:14:22.848 "name": "nvmf_tgt_poll_group_0", 00:14:22.848 "admin_qpairs": 0, 00:14:22.848 "io_qpairs": 0, 00:14:22.848 "current_admin_qpairs": 0, 00:14:22.848 "current_io_qpairs": 0, 00:14:22.848 "pending_bdev_io": 0, 00:14:22.848 "completed_nvme_io": 0, 00:14:22.848 "transports": [] 00:14:22.848 }, 00:14:22.848 { 00:14:22.848 "name": "nvmf_tgt_poll_group_1", 00:14:22.848 "admin_qpairs": 0, 00:14:22.848 "io_qpairs": 0, 00:14:22.848 "current_admin_qpairs": 0, 00:14:22.848 "current_io_qpairs": 0, 00:14:22.848 "pending_bdev_io": 0, 00:14:22.848 "completed_nvme_io": 0, 00:14:22.848 "transports": [] 00:14:22.848 }, 00:14:22.848 { 00:14:22.848 "name": "nvmf_tgt_poll_group_2", 00:14:22.848 "admin_qpairs": 0, 00:14:22.848 "io_qpairs": 0, 00:14:22.848 "current_admin_qpairs": 0, 00:14:22.848 "current_io_qpairs": 0, 00:14:22.848 "pending_bdev_io": 0, 00:14:22.848 "completed_nvme_io": 0, 00:14:22.848 "transports": [] 00:14:22.848 }, 00:14:22.848 { 00:14:22.848 "name": "nvmf_tgt_poll_group_3", 00:14:22.848 "admin_qpairs": 0, 00:14:22.848 "io_qpairs": 0, 00:14:22.848 "current_admin_qpairs": 0, 00:14:22.848 "current_io_qpairs": 0, 00:14:22.848 "pending_bdev_io": 0, 00:14:22.848 "completed_nvme_io": 0, 00:14:22.848 "transports": [] 
00:14:22.848 } 00:14:22.848 ] 00:14:22.848 }' 00:14:22.848 11:08:43 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:22.848 11:08:43 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:22.848 11:08:43 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:22.848 11:08:43 -- target/rpc.sh@15 -- # wc -l 00:14:22.848 11:08:43 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:22.848 11:08:43 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:23.107 11:08:43 -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:23.107 11:08:43 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:23.107 11:08:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.107 11:08:43 -- common/autotest_common.sh@10 -- # set +x 00:14:23.107 [2024-12-13 11:08:43.455464] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x598960/0x59ce50) succeed. 00:14:23.107 [2024-12-13 11:08:43.463843] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x599f50/0x5de4f0) succeed. 00:14:23.107 11:08:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.107 11:08:43 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:23.107 11:08:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.107 11:08:43 -- common/autotest_common.sh@10 -- # set +x 00:14:23.107 11:08:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.107 11:08:43 -- target/rpc.sh@33 -- # stats='{ 00:14:23.107 "tick_rate": 2700000000, 00:14:23.107 "poll_groups": [ 00:14:23.107 { 00:14:23.107 "name": "nvmf_tgt_poll_group_0", 00:14:23.107 "admin_qpairs": 0, 00:14:23.107 "io_qpairs": 0, 00:14:23.107 "current_admin_qpairs": 0, 00:14:23.107 "current_io_qpairs": 0, 00:14:23.107 "pending_bdev_io": 0, 00:14:23.107 "completed_nvme_io": 0, 00:14:23.107 "transports": [ 00:14:23.107 { 00:14:23.107 "trtype": "RDMA", 00:14:23.107 "pending_data_buffer": 0, 00:14:23.107 "devices": [ 00:14:23.107 { 00:14:23.107 "name": "mlx5_0", 00:14:23.107 "polls": 15121, 00:14:23.107 "idle_polls": 15121, 00:14:23.107 "completions": 0, 00:14:23.107 "requests": 0, 00:14:23.107 "request_latency": 0, 00:14:23.107 "pending_free_request": 0, 00:14:23.107 "pending_rdma_read": 0, 00:14:23.107 "pending_rdma_write": 0, 00:14:23.107 "pending_rdma_send": 0, 00:14:23.107 "total_send_wrs": 0, 00:14:23.107 "send_doorbell_updates": 0, 00:14:23.107 "total_recv_wrs": 4096, 00:14:23.107 "recv_doorbell_updates": 1 00:14:23.107 }, 00:14:23.107 { 00:14:23.107 "name": "mlx5_1", 00:14:23.107 "polls": 15121, 00:14:23.107 "idle_polls": 15121, 00:14:23.107 "completions": 0, 00:14:23.107 "requests": 0, 00:14:23.107 "request_latency": 0, 00:14:23.107 "pending_free_request": 0, 00:14:23.107 "pending_rdma_read": 0, 00:14:23.107 "pending_rdma_write": 0, 00:14:23.107 "pending_rdma_send": 0, 00:14:23.107 "total_send_wrs": 0, 00:14:23.107 "send_doorbell_updates": 0, 00:14:23.107 "total_recv_wrs": 4096, 00:14:23.107 "recv_doorbell_updates": 1 00:14:23.107 } 00:14:23.107 ] 00:14:23.107 } 00:14:23.107 ] 00:14:23.107 }, 00:14:23.107 { 00:14:23.107 "name": "nvmf_tgt_poll_group_1", 00:14:23.107 "admin_qpairs": 0, 00:14:23.108 "io_qpairs": 0, 00:14:23.108 "current_admin_qpairs": 0, 00:14:23.108 "current_io_qpairs": 0, 00:14:23.108 "pending_bdev_io": 0, 00:14:23.108 "completed_nvme_io": 0, 00:14:23.108 "transports": [ 00:14:23.108 { 00:14:23.108 "trtype": "RDMA", 00:14:23.108 "pending_data_buffer": 0, 00:14:23.108 "devices": [ 00:14:23.108 { 00:14:23.108 "name": "mlx5_0", 00:14:23.108 "polls": 9819, 
00:14:23.108 "idle_polls": 9819, 00:14:23.108 "completions": 0, 00:14:23.108 "requests": 0, 00:14:23.108 "request_latency": 0, 00:14:23.108 "pending_free_request": 0, 00:14:23.108 "pending_rdma_read": 0, 00:14:23.108 "pending_rdma_write": 0, 00:14:23.108 "pending_rdma_send": 0, 00:14:23.108 "total_send_wrs": 0, 00:14:23.108 "send_doorbell_updates": 0, 00:14:23.108 "total_recv_wrs": 4096, 00:14:23.108 "recv_doorbell_updates": 1 00:14:23.108 }, 00:14:23.108 { 00:14:23.108 "name": "mlx5_1", 00:14:23.108 "polls": 9819, 00:14:23.108 "idle_polls": 9819, 00:14:23.108 "completions": 0, 00:14:23.108 "requests": 0, 00:14:23.108 "request_latency": 0, 00:14:23.108 "pending_free_request": 0, 00:14:23.108 "pending_rdma_read": 0, 00:14:23.108 "pending_rdma_write": 0, 00:14:23.108 "pending_rdma_send": 0, 00:14:23.108 "total_send_wrs": 0, 00:14:23.108 "send_doorbell_updates": 0, 00:14:23.108 "total_recv_wrs": 4096, 00:14:23.108 "recv_doorbell_updates": 1 00:14:23.108 } 00:14:23.108 ] 00:14:23.108 } 00:14:23.108 ] 00:14:23.108 }, 00:14:23.108 { 00:14:23.108 "name": "nvmf_tgt_poll_group_2", 00:14:23.108 "admin_qpairs": 0, 00:14:23.108 "io_qpairs": 0, 00:14:23.108 "current_admin_qpairs": 0, 00:14:23.108 "current_io_qpairs": 0, 00:14:23.108 "pending_bdev_io": 0, 00:14:23.108 "completed_nvme_io": 0, 00:14:23.108 "transports": [ 00:14:23.108 { 00:14:23.108 "trtype": "RDMA", 00:14:23.108 "pending_data_buffer": 0, 00:14:23.108 "devices": [ 00:14:23.108 { 00:14:23.108 "name": "mlx5_0", 00:14:23.108 "polls": 5516, 00:14:23.108 "idle_polls": 5516, 00:14:23.108 "completions": 0, 00:14:23.108 "requests": 0, 00:14:23.108 "request_latency": 0, 00:14:23.108 "pending_free_request": 0, 00:14:23.108 "pending_rdma_read": 0, 00:14:23.108 "pending_rdma_write": 0, 00:14:23.108 "pending_rdma_send": 0, 00:14:23.108 "total_send_wrs": 0, 00:14:23.108 "send_doorbell_updates": 0, 00:14:23.108 "total_recv_wrs": 4096, 00:14:23.108 "recv_doorbell_updates": 1 00:14:23.108 }, 00:14:23.108 { 00:14:23.108 "name": "mlx5_1", 00:14:23.108 "polls": 5516, 00:14:23.108 "idle_polls": 5516, 00:14:23.108 "completions": 0, 00:14:23.108 "requests": 0, 00:14:23.108 "request_latency": 0, 00:14:23.108 "pending_free_request": 0, 00:14:23.108 "pending_rdma_read": 0, 00:14:23.108 "pending_rdma_write": 0, 00:14:23.108 "pending_rdma_send": 0, 00:14:23.108 "total_send_wrs": 0, 00:14:23.108 "send_doorbell_updates": 0, 00:14:23.108 "total_recv_wrs": 4096, 00:14:23.108 "recv_doorbell_updates": 1 00:14:23.108 } 00:14:23.108 ] 00:14:23.108 } 00:14:23.108 ] 00:14:23.108 }, 00:14:23.108 { 00:14:23.108 "name": "nvmf_tgt_poll_group_3", 00:14:23.108 "admin_qpairs": 0, 00:14:23.108 "io_qpairs": 0, 00:14:23.108 "current_admin_qpairs": 0, 00:14:23.108 "current_io_qpairs": 0, 00:14:23.108 "pending_bdev_io": 0, 00:14:23.108 "completed_nvme_io": 0, 00:14:23.108 "transports": [ 00:14:23.108 { 00:14:23.108 "trtype": "RDMA", 00:14:23.108 "pending_data_buffer": 0, 00:14:23.108 "devices": [ 00:14:23.108 { 00:14:23.108 "name": "mlx5_0", 00:14:23.108 "polls": 926, 00:14:23.108 "idle_polls": 926, 00:14:23.108 "completions": 0, 00:14:23.108 "requests": 0, 00:14:23.108 "request_latency": 0, 00:14:23.108 "pending_free_request": 0, 00:14:23.108 "pending_rdma_read": 0, 00:14:23.108 "pending_rdma_write": 0, 00:14:23.108 "pending_rdma_send": 0, 00:14:23.108 "total_send_wrs": 0, 00:14:23.108 "send_doorbell_updates": 0, 00:14:23.108 "total_recv_wrs": 4096, 00:14:23.108 "recv_doorbell_updates": 1 00:14:23.108 }, 00:14:23.108 { 00:14:23.108 "name": "mlx5_1", 00:14:23.108 "polls": 926, 
00:14:23.108 "idle_polls": 926, 00:14:23.108 "completions": 0, 00:14:23.108 "requests": 0, 00:14:23.108 "request_latency": 0, 00:14:23.108 "pending_free_request": 0, 00:14:23.108 "pending_rdma_read": 0, 00:14:23.108 "pending_rdma_write": 0, 00:14:23.108 "pending_rdma_send": 0, 00:14:23.108 "total_send_wrs": 0, 00:14:23.108 "send_doorbell_updates": 0, 00:14:23.108 "total_recv_wrs": 4096, 00:14:23.108 "recv_doorbell_updates": 1 00:14:23.108 } 00:14:23.108 ] 00:14:23.108 } 00:14:23.108 ] 00:14:23.108 } 00:14:23.108 ] 00:14:23.108 }' 00:14:23.108 11:08:43 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:23.108 11:08:43 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:23.108 11:08:43 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:23.108 11:08:43 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:23.108 11:08:43 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:23.108 11:08:43 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:23.108 11:08:43 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:23.108 11:08:43 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:23.108 11:08:43 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:23.368 11:08:43 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:23.368 11:08:43 -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:14:23.368 11:08:43 -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:14:23.368 11:08:43 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:14:23.368 11:08:43 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:14:23.368 11:08:43 -- target/rpc.sh@15 -- # wc -l 00:14:23.368 11:08:43 -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:14:23.368 11:08:43 -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:14:23.368 11:08:43 -- target/rpc.sh@41 -- # transport_type=RDMA 00:14:23.368 11:08:43 -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:14:23.368 11:08:43 -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:14:23.368 11:08:43 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:14:23.368 11:08:43 -- target/rpc.sh@15 -- # wc -l 00:14:23.368 11:08:43 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:14:23.368 11:08:43 -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:14:23.368 11:08:43 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:23.368 11:08:43 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:23.368 11:08:43 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:23.368 11:08:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.368 11:08:43 -- common/autotest_common.sh@10 -- # set +x 00:14:23.368 Malloc1 00:14:23.368 11:08:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.368 11:08:43 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:23.368 11:08:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.368 11:08:43 -- common/autotest_common.sh@10 -- # set +x 00:14:23.368 11:08:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.368 11:08:43 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:23.368 11:08:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.368 11:08:43 -- common/autotest_common.sh@10 -- # set +x 00:14:23.368 11:08:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.368 
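rpc_cmd in this trace is the test harness's wrapper around SPDK's JSON-RPC interface; the setup it has just performed (an RDMA transport with the options shown above, a 64 MB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with serial SPDKISFASTANDAWESOME, and that bdev attached as a namespace) corresponds roughly to the following standalone scripts/rpc.py calls:

  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1

The RDMA listener on 192.168.100.8:4420 is added just below.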
11:08:43 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:23.368 11:08:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.368 11:08:43 -- common/autotest_common.sh@10 -- # set +x 00:14:23.368 11:08:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.368 11:08:43 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:23.368 11:08:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.368 11:08:43 -- common/autotest_common.sh@10 -- # set +x 00:14:23.368 [2024-12-13 11:08:43.855196] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:23.368 11:08:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.368 11:08:43 -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:14:23.368 11:08:43 -- common/autotest_common.sh@650 -- # local es=0 00:14:23.368 11:08:43 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:14:23.368 11:08:43 -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:23.368 11:08:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:23.368 11:08:43 -- common/autotest_common.sh@642 -- # type -t nvme 00:14:23.368 11:08:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:23.368 11:08:43 -- common/autotest_common.sh@644 -- # type -P nvme 00:14:23.368 11:08:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:23.368 11:08:43 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:23.368 11:08:43 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:23.368 11:08:43 -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:14:23.368 [2024-12-13 11:08:43.894760] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562' 00:14:23.368 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:23.368 could not add new controller: failed to write to nvme-fabrics device 00:14:23.368 11:08:43 -- common/autotest_common.sh@653 -- # es=1 00:14:23.368 11:08:43 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:23.368 11:08:43 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:23.368 11:08:43 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:23.368 11:08:43 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:23.368 11:08:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.368 11:08:43 -- common/autotest_common.sh@10 -- # set +x 00:14:23.627 
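This step exercises per-host access control: allow-any-host is switched off, the RDMA listener is added on 192.168.100.8:4420, the first nvme connect is expected to fail with "Subsystem ... does not allow host ...", and only then is the initiator's host NQN whitelisted so the connect below can succeed. Roughly the same sequence as standalone commands (NQNs and addresses taken from this run):

  ./scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  # nvme connect fails with "does not allow host" until the host NQN is added:
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
  nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 \
      --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 \
      -a 192.168.100.8 -s 4420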
11:08:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.627 11:08:43 -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:24.563 11:08:44 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:24.563 11:08:44 -- common/autotest_common.sh@1187 -- # local i=0 00:14:24.563 11:08:44 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:24.563 11:08:44 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:24.563 11:08:44 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:26.467 11:08:46 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:26.467 11:08:46 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:26.467 11:08:46 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:14:26.467 11:08:46 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:26.467 11:08:46 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:26.467 11:08:46 -- common/autotest_common.sh@1197 -- # return 0 00:14:26.467 11:08:46 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:27.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.403 11:08:47 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:27.404 11:08:47 -- common/autotest_common.sh@1208 -- # local i=0 00:14:27.404 11:08:47 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:27.404 11:08:47 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:27.404 11:08:47 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:27.404 11:08:47 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:27.404 11:08:47 -- common/autotest_common.sh@1220 -- # return 0 00:14:27.404 11:08:47 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:27.404 11:08:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.404 11:08:47 -- common/autotest_common.sh@10 -- # set +x 00:14:27.404 11:08:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.404 11:08:47 -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:27.404 11:08:47 -- common/autotest_common.sh@650 -- # local es=0 00:14:27.404 11:08:47 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:27.404 11:08:47 -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:27.404 11:08:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:27.404 11:08:47 -- common/autotest_common.sh@642 -- # type -t nvme 00:14:27.404 11:08:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:27.404 11:08:47 -- common/autotest_common.sh@644 -- # type -P nvme 00:14:27.404 11:08:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:27.404 11:08:47 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:27.404 
11:08:47 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:27.404 11:08:47 -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:27.404 [2024-12-13 11:08:47.936389] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562' 00:14:27.662 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:27.662 could not add new controller: failed to write to nvme-fabrics device 00:14:27.662 11:08:47 -- common/autotest_common.sh@653 -- # es=1 00:14:27.663 11:08:47 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:27.663 11:08:47 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:27.663 11:08:47 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:27.663 11:08:47 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:27.663 11:08:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.663 11:08:47 -- common/autotest_common.sh@10 -- # set +x 00:14:27.663 11:08:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.663 11:08:47 -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:28.599 11:08:48 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:28.599 11:08:48 -- common/autotest_common.sh@1187 -- # local i=0 00:14:28.599 11:08:48 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:28.599 11:08:48 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:28.599 11:08:48 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:30.503 11:08:50 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:30.503 11:08:50 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:30.503 11:08:50 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:14:30.503 11:08:50 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:30.503 11:08:50 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:30.503 11:08:50 -- common/autotest_common.sh@1197 -- # return 0 00:14:30.503 11:08:50 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:31.440 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.440 11:08:51 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:31.440 11:08:51 -- common/autotest_common.sh@1208 -- # local i=0 00:14:31.440 11:08:51 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:31.440 11:08:51 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:31.440 11:08:51 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:31.440 11:08:51 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:31.440 11:08:51 -- common/autotest_common.sh@1220 -- # return 0 00:14:31.440 11:08:51 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:31.440 11:08:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.440 11:08:51 -- common/autotest_common.sh@10 -- # set +x 00:14:31.440 11:08:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
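The next phase, starting with the seq 1 5 just below, repeats the same subsystem lifecycle five times: create the subsystem, add the RDMA listener and the Malloc1 namespace (nsid 5), enable allow-any-host, connect with nvme-cli, poll lsblk until a block device with serial SPDKISFASTANDAWESOME appears, then disconnect and tear the subsystem down. A condensed sketch of one iteration (the run's --hostnqn/--hostid flags are omitted here since allow-any-host is enabled):

  for i in $(seq 1 5); do
      ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
      ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
      ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
      until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done   # waitforserial
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
      ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
      ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done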
00:14:31.440 11:08:51 -- target/rpc.sh@81 -- # seq 1 5 00:14:31.440 11:08:51 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:31.440 11:08:51 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:31.440 11:08:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.440 11:08:51 -- common/autotest_common.sh@10 -- # set +x 00:14:31.440 11:08:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.440 11:08:51 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:31.440 11:08:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.440 11:08:51 -- common/autotest_common.sh@10 -- # set +x 00:14:31.440 [2024-12-13 11:08:51.992390] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:31.440 11:08:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.440 11:08:51 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:31.440 11:08:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.440 11:08:51 -- common/autotest_common.sh@10 -- # set +x 00:14:31.440 11:08:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.440 11:08:52 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:31.440 11:08:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.440 11:08:52 -- common/autotest_common.sh@10 -- # set +x 00:14:31.699 11:08:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.699 11:08:52 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:32.705 11:08:52 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:32.705 11:08:53 -- common/autotest_common.sh@1187 -- # local i=0 00:14:32.705 11:08:53 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:32.705 11:08:53 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:32.705 11:08:53 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:34.651 11:08:55 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:34.651 11:08:55 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:34.651 11:08:55 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:14:34.651 11:08:55 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:34.651 11:08:55 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:34.651 11:08:55 -- common/autotest_common.sh@1197 -- # return 0 00:14:34.651 11:08:55 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:35.592 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.592 11:08:55 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:35.592 11:08:55 -- common/autotest_common.sh@1208 -- # local i=0 00:14:35.592 11:08:55 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:35.592 11:08:55 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:35.592 11:08:55 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:35.592 11:08:55 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:35.592 11:08:55 -- common/autotest_common.sh@1220 -- # return 0 00:14:35.592 
11:08:55 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:35.592 11:08:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.592 11:08:55 -- common/autotest_common.sh@10 -- # set +x 00:14:35.592 11:08:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.592 11:08:55 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:35.592 11:08:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.592 11:08:55 -- common/autotest_common.sh@10 -- # set +x 00:14:35.592 11:08:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.592 11:08:56 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:35.592 11:08:56 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:35.593 11:08:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.593 11:08:56 -- common/autotest_common.sh@10 -- # set +x 00:14:35.593 11:08:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.593 11:08:56 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:35.593 11:08:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.593 11:08:56 -- common/autotest_common.sh@10 -- # set +x 00:14:35.593 [2024-12-13 11:08:56.013435] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:35.593 11:08:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.593 11:08:56 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:35.593 11:08:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.593 11:08:56 -- common/autotest_common.sh@10 -- # set +x 00:14:35.593 11:08:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.593 11:08:56 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:35.593 11:08:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.593 11:08:56 -- common/autotest_common.sh@10 -- # set +x 00:14:35.593 11:08:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.593 11:08:56 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:36.529 11:08:57 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:36.529 11:08:57 -- common/autotest_common.sh@1187 -- # local i=0 00:14:36.529 11:08:57 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:36.529 11:08:57 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:36.529 11:08:57 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:39.062 11:08:59 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:39.062 11:08:59 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:39.062 11:08:59 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:14:39.062 11:08:59 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:39.062 11:08:59 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:39.062 11:08:59 -- common/autotest_common.sh@1197 -- # return 0 00:14:39.062 11:08:59 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:39.630 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.630 11:08:59 -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:39.630 11:08:59 -- common/autotest_common.sh@1208 -- # local i=0 00:14:39.630 11:08:59 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:39.630 11:08:59 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:39.630 11:08:59 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:39.630 11:08:59 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:39.630 11:08:59 -- common/autotest_common.sh@1220 -- # return 0 00:14:39.630 11:08:59 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:39.630 11:08:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.630 11:08:59 -- common/autotest_common.sh@10 -- # set +x 00:14:39.630 11:09:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.630 11:09:00 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:39.630 11:09:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.630 11:09:00 -- common/autotest_common.sh@10 -- # set +x 00:14:39.630 11:09:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.630 11:09:00 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:39.630 11:09:00 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:39.630 11:09:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.630 11:09:00 -- common/autotest_common.sh@10 -- # set +x 00:14:39.630 11:09:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.630 11:09:00 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:39.630 11:09:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.630 11:09:00 -- common/autotest_common.sh@10 -- # set +x 00:14:39.630 [2024-12-13 11:09:00.029295] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:39.630 11:09:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.630 11:09:00 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:39.630 11:09:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.630 11:09:00 -- common/autotest_common.sh@10 -- # set +x 00:14:39.630 11:09:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.630 11:09:00 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:39.630 11:09:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.630 11:09:00 -- common/autotest_common.sh@10 -- # set +x 00:14:39.630 11:09:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.630 11:09:00 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:40.567 11:09:01 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:40.567 11:09:01 -- common/autotest_common.sh@1187 -- # local i=0 00:14:40.567 11:09:01 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:40.567 11:09:01 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:40.567 11:09:01 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:42.471 11:09:03 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:42.471 11:09:03 -- 
common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:42.471 11:09:03 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:14:42.730 11:09:03 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:42.730 11:09:03 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:42.730 11:09:03 -- common/autotest_common.sh@1197 -- # return 0 00:14:42.730 11:09:03 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:43.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.667 11:09:03 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:43.667 11:09:03 -- common/autotest_common.sh@1208 -- # local i=0 00:14:43.667 11:09:03 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:43.667 11:09:03 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:43.667 11:09:04 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:43.667 11:09:04 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:43.667 11:09:04 -- common/autotest_common.sh@1220 -- # return 0 00:14:43.667 11:09:04 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:43.667 11:09:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.667 11:09:04 -- common/autotest_common.sh@10 -- # set +x 00:14:43.667 11:09:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.667 11:09:04 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:43.667 11:09:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.667 11:09:04 -- common/autotest_common.sh@10 -- # set +x 00:14:43.667 11:09:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.667 11:09:04 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:43.667 11:09:04 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:43.667 11:09:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.667 11:09:04 -- common/autotest_common.sh@10 -- # set +x 00:14:43.667 11:09:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.667 11:09:04 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:43.667 11:09:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.667 11:09:04 -- common/autotest_common.sh@10 -- # set +x 00:14:43.667 [2024-12-13 11:09:04.049789] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:43.667 11:09:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.667 11:09:04 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:43.667 11:09:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.667 11:09:04 -- common/autotest_common.sh@10 -- # set +x 00:14:43.667 11:09:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.667 11:09:04 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:43.667 11:09:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.667 11:09:04 -- common/autotest_common.sh@10 -- # set +x 00:14:43.667 11:09:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.667 11:09:04 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 
--hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:44.606 11:09:05 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:44.606 11:09:05 -- common/autotest_common.sh@1187 -- # local i=0 00:14:44.606 11:09:05 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:44.606 11:09:05 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:44.606 11:09:05 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:46.510 11:09:07 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:46.510 11:09:07 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:46.510 11:09:07 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:14:46.510 11:09:07 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:46.510 11:09:07 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:46.510 11:09:07 -- common/autotest_common.sh@1197 -- # return 0 00:14:46.510 11:09:07 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:47.445 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.445 11:09:07 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:47.445 11:09:07 -- common/autotest_common.sh@1208 -- # local i=0 00:14:47.445 11:09:07 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:47.445 11:09:07 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:47.445 11:09:08 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:47.445 11:09:08 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:47.704 11:09:08 -- common/autotest_common.sh@1220 -- # return 0 00:14:47.705 11:09:08 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:47.705 11:09:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.705 11:09:08 -- common/autotest_common.sh@10 -- # set +x 00:14:47.705 11:09:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.705 11:09:08 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:47.705 11:09:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.705 11:09:08 -- common/autotest_common.sh@10 -- # set +x 00:14:47.705 11:09:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.705 11:09:08 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:47.705 11:09:08 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:47.705 11:09:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.705 11:09:08 -- common/autotest_common.sh@10 -- # set +x 00:14:47.705 11:09:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.705 11:09:08 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:47.705 11:09:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.705 11:09:08 -- common/autotest_common.sh@10 -- # set +x 00:14:47.705 [2024-12-13 11:09:08.051839] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:47.705 11:09:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.705 11:09:08 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:47.705 11:09:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.705 11:09:08 -- 
common/autotest_common.sh@10 -- # set +x 00:14:47.705 11:09:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.705 11:09:08 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:47.705 11:09:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.705 11:09:08 -- common/autotest_common.sh@10 -- # set +x 00:14:47.705 11:09:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.705 11:09:08 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:48.642 11:09:09 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:48.642 11:09:09 -- common/autotest_common.sh@1187 -- # local i=0 00:14:48.642 11:09:09 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:48.642 11:09:09 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:48.642 11:09:09 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:50.546 11:09:11 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:50.546 11:09:11 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:50.546 11:09:11 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:14:50.546 11:09:11 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:50.546 11:09:11 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:50.546 11:09:11 -- common/autotest_common.sh@1197 -- # return 0 00:14:50.546 11:09:11 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:51.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.483 11:09:12 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:51.483 11:09:12 -- common/autotest_common.sh@1208 -- # local i=0 00:14:51.483 11:09:12 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:51.483 11:09:12 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:51.483 11:09:12 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:51.483 11:09:12 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:51.483 11:09:12 -- common/autotest_common.sh@1220 -- # return 0 00:14:51.483 11:09:12 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:51.483 11:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.483 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:51.743 11:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.743 11:09:12 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:51.743 11:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.743 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:51.743 11:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.743 11:09:12 -- target/rpc.sh@99 -- # seq 1 5 00:14:51.743 11:09:12 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:51.743 11:09:12 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:51.743 11:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.743 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:51.743 11:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.743 11:09:12 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:51.743 11:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.743 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:51.743 [2024-12-13 11:09:12.081908] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:51.743 11:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.743 11:09:12 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:51.743 11:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.743 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:51.743 11:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.743 11:09:12 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:51.743 11:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.743 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:51.743 11:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.743 11:09:12 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.743 11:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.743 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:51.743 11:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.743 11:09:12 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:51.743 11:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.743 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:51.743 11:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.743 11:09:12 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:51.743 11:09:12 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:51.743 11:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.743 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:51.743 11:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.743 11:09:12 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:51.743 11:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.743 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:51.743 [2024-12-13 11:09:12.130072] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:51.743 11:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.743 11:09:12 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:51.743 11:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.743 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:51.743 11:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.743 11:09:12 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:51.743 11:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.743 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:51.743 11:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.743 11:09:12 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.743 11:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.743 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:51.743 
11:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.743 11:09:12 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:51.743 11:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.743 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:51.743 11:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.743 11:09:12 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:51.743 11:09:12 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:51.743 11:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.743 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:51.743 11:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.743 11:09:12 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:51.743 11:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.743 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:51.743 [2024-12-13 11:09:12.178255] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:51.743 11:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.743 11:09:12 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:51.743 11:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.743 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:51.743 11:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.743 11:09:12 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:51.743 11:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.743 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:51.743 11:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.743 11:09:12 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.743 11:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.743 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:51.743 11:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.743 11:09:12 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:51.743 11:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.743 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:51.743 11:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.743 11:09:12 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:51.743 11:09:12 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:51.743 11:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.743 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:51.743 11:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.743 11:09:12 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:51.743 11:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.743 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:51.743 [2024-12-13 11:09:12.226432] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:51.743 11:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.743 11:09:12 -- target/rpc.sh@102 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:51.743 11:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.743 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:51.743 11:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.743 11:09:12 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:51.743 11:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.743 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:51.743 11:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.743 11:09:12 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.743 11:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.743 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:51.743 11:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.743 11:09:12 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:51.743 11:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.743 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:51.743 11:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.743 11:09:12 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:51.743 11:09:12 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:51.743 11:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.743 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:51.743 11:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.744 11:09:12 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:51.744 11:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.744 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:51.744 [2024-12-13 11:09:12.274576] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:51.744 11:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.744 11:09:12 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:51.744 11:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.744 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:51.744 11:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.744 11:09:12 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:51.744 11:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.744 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:51.744 11:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.744 11:09:12 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.744 11:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.744 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:51.744 11:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.744 11:09:12 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:51.744 11:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.744 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:51.744 11:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.003 11:09:12 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:52.003 
11:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.003 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:52.003 11:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.003 11:09:12 -- target/rpc.sh@110 -- # stats='{ 00:14:52.003 "tick_rate": 2700000000, 00:14:52.003 "poll_groups": [ 00:14:52.003 { 00:14:52.003 "name": "nvmf_tgt_poll_group_0", 00:14:52.003 "admin_qpairs": 2, 00:14:52.003 "io_qpairs": 27, 00:14:52.003 "current_admin_qpairs": 0, 00:14:52.003 "current_io_qpairs": 0, 00:14:52.003 "pending_bdev_io": 0, 00:14:52.003 "completed_nvme_io": 127, 00:14:52.003 "transports": [ 00:14:52.003 { 00:14:52.003 "trtype": "RDMA", 00:14:52.003 "pending_data_buffer": 0, 00:14:52.003 "devices": [ 00:14:52.003 { 00:14:52.003 "name": "mlx5_0", 00:14:52.003 "polls": 3514573, 00:14:52.003 "idle_polls": 3514248, 00:14:52.003 "completions": 367, 00:14:52.003 "requests": 183, 00:14:52.003 "request_latency": 36548934, 00:14:52.003 "pending_free_request": 0, 00:14:52.003 "pending_rdma_read": 0, 00:14:52.003 "pending_rdma_write": 0, 00:14:52.003 "pending_rdma_send": 0, 00:14:52.003 "total_send_wrs": 309, 00:14:52.003 "send_doorbell_updates": 158, 00:14:52.003 "total_recv_wrs": 4279, 00:14:52.003 "recv_doorbell_updates": 158 00:14:52.003 }, 00:14:52.003 { 00:14:52.003 "name": "mlx5_1", 00:14:52.003 "polls": 3514573, 00:14:52.003 "idle_polls": 3514573, 00:14:52.003 "completions": 0, 00:14:52.003 "requests": 0, 00:14:52.003 "request_latency": 0, 00:14:52.003 "pending_free_request": 0, 00:14:52.003 "pending_rdma_read": 0, 00:14:52.003 "pending_rdma_write": 0, 00:14:52.003 "pending_rdma_send": 0, 00:14:52.003 "total_send_wrs": 0, 00:14:52.003 "send_doorbell_updates": 0, 00:14:52.003 "total_recv_wrs": 4096, 00:14:52.003 "recv_doorbell_updates": 1 00:14:52.003 } 00:14:52.003 ] 00:14:52.003 } 00:14:52.003 ] 00:14:52.003 }, 00:14:52.003 { 00:14:52.003 "name": "nvmf_tgt_poll_group_1", 00:14:52.003 "admin_qpairs": 2, 00:14:52.003 "io_qpairs": 26, 00:14:52.003 "current_admin_qpairs": 0, 00:14:52.003 "current_io_qpairs": 0, 00:14:52.003 "pending_bdev_io": 0, 00:14:52.003 "completed_nvme_io": 125, 00:14:52.003 "transports": [ 00:14:52.003 { 00:14:52.003 "trtype": "RDMA", 00:14:52.003 "pending_data_buffer": 0, 00:14:52.003 "devices": [ 00:14:52.003 { 00:14:52.003 "name": "mlx5_0", 00:14:52.003 "polls": 3457087, 00:14:52.003 "idle_polls": 3456765, 00:14:52.003 "completions": 362, 00:14:52.003 "requests": 181, 00:14:52.003 "request_latency": 37124808, 00:14:52.003 "pending_free_request": 0, 00:14:52.003 "pending_rdma_read": 0, 00:14:52.003 "pending_rdma_write": 0, 00:14:52.003 "pending_rdma_send": 0, 00:14:52.003 "total_send_wrs": 306, 00:14:52.003 "send_doorbell_updates": 157, 00:14:52.003 "total_recv_wrs": 4277, 00:14:52.003 "recv_doorbell_updates": 158 00:14:52.003 }, 00:14:52.003 { 00:14:52.003 "name": "mlx5_1", 00:14:52.003 "polls": 3457087, 00:14:52.003 "idle_polls": 3457087, 00:14:52.003 "completions": 0, 00:14:52.003 "requests": 0, 00:14:52.003 "request_latency": 0, 00:14:52.003 "pending_free_request": 0, 00:14:52.003 "pending_rdma_read": 0, 00:14:52.003 "pending_rdma_write": 0, 00:14:52.003 "pending_rdma_send": 0, 00:14:52.003 "total_send_wrs": 0, 00:14:52.003 "send_doorbell_updates": 0, 00:14:52.003 "total_recv_wrs": 4096, 00:14:52.003 "recv_doorbell_updates": 1 00:14:52.003 } 00:14:52.003 ] 00:14:52.003 } 00:14:52.003 ] 00:14:52.003 }, 00:14:52.003 { 00:14:52.003 "name": "nvmf_tgt_poll_group_2", 00:14:52.003 "admin_qpairs": 1, 00:14:52.003 "io_qpairs": 26, 00:14:52.003 
"current_admin_qpairs": 0, 00:14:52.003 "current_io_qpairs": 0, 00:14:52.003 "pending_bdev_io": 0, 00:14:52.003 "completed_nvme_io": 126, 00:14:52.003 "transports": [ 00:14:52.003 { 00:14:52.003 "trtype": "RDMA", 00:14:52.003 "pending_data_buffer": 0, 00:14:52.003 "devices": [ 00:14:52.003 { 00:14:52.003 "name": "mlx5_0", 00:14:52.003 "polls": 3553495, 00:14:52.003 "idle_polls": 3553227, 00:14:52.003 "completions": 309, 00:14:52.003 "requests": 154, 00:14:52.003 "request_latency": 34232226, 00:14:52.003 "pending_free_request": 0, 00:14:52.003 "pending_rdma_read": 0, 00:14:52.003 "pending_rdma_write": 0, 00:14:52.003 "pending_rdma_send": 0, 00:14:52.003 "total_send_wrs": 268, 00:14:52.003 "send_doorbell_updates": 130, 00:14:52.003 "total_recv_wrs": 4250, 00:14:52.003 "recv_doorbell_updates": 130 00:14:52.003 }, 00:14:52.003 { 00:14:52.003 "name": "mlx5_1", 00:14:52.003 "polls": 3553495, 00:14:52.003 "idle_polls": 3553495, 00:14:52.003 "completions": 0, 00:14:52.003 "requests": 0, 00:14:52.003 "request_latency": 0, 00:14:52.003 "pending_free_request": 0, 00:14:52.003 "pending_rdma_read": 0, 00:14:52.003 "pending_rdma_write": 0, 00:14:52.003 "pending_rdma_send": 0, 00:14:52.003 "total_send_wrs": 0, 00:14:52.003 "send_doorbell_updates": 0, 00:14:52.003 "total_recv_wrs": 4096, 00:14:52.003 "recv_doorbell_updates": 1 00:14:52.003 } 00:14:52.003 ] 00:14:52.003 } 00:14:52.003 ] 00:14:52.003 }, 00:14:52.003 { 00:14:52.003 "name": "nvmf_tgt_poll_group_3", 00:14:52.003 "admin_qpairs": 2, 00:14:52.003 "io_qpairs": 26, 00:14:52.003 "current_admin_qpairs": 0, 00:14:52.003 "current_io_qpairs": 0, 00:14:52.003 "pending_bdev_io": 0, 00:14:52.003 "completed_nvme_io": 77, 00:14:52.003 "transports": [ 00:14:52.003 { 00:14:52.003 "trtype": "RDMA", 00:14:52.003 "pending_data_buffer": 0, 00:14:52.003 "devices": [ 00:14:52.003 { 00:14:52.003 "name": "mlx5_0", 00:14:52.003 "polls": 2718335, 00:14:52.003 "idle_polls": 2718103, 00:14:52.003 "completions": 258, 00:14:52.003 "requests": 129, 00:14:52.003 "request_latency": 23664244, 00:14:52.003 "pending_free_request": 0, 00:14:52.003 "pending_rdma_read": 0, 00:14:52.003 "pending_rdma_write": 0, 00:14:52.003 "pending_rdma_send": 0, 00:14:52.003 "total_send_wrs": 204, 00:14:52.003 "send_doorbell_updates": 117, 00:14:52.004 "total_recv_wrs": 4225, 00:14:52.004 "recv_doorbell_updates": 118 00:14:52.004 }, 00:14:52.004 { 00:14:52.004 "name": "mlx5_1", 00:14:52.004 "polls": 2718335, 00:14:52.004 "idle_polls": 2718335, 00:14:52.004 "completions": 0, 00:14:52.004 "requests": 0, 00:14:52.004 "request_latency": 0, 00:14:52.004 "pending_free_request": 0, 00:14:52.004 "pending_rdma_read": 0, 00:14:52.004 "pending_rdma_write": 0, 00:14:52.004 "pending_rdma_send": 0, 00:14:52.004 "total_send_wrs": 0, 00:14:52.004 "send_doorbell_updates": 0, 00:14:52.004 "total_recv_wrs": 4096, 00:14:52.004 "recv_doorbell_updates": 1 00:14:52.004 } 00:14:52.004 ] 00:14:52.004 } 00:14:52.004 ] 00:14:52.004 } 00:14:52.004 ] 00:14:52.004 }' 00:14:52.004 11:09:12 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:52.004 11:09:12 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:52.004 11:09:12 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:52.004 11:09:12 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:52.004 11:09:12 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:52.004 11:09:12 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:52.004 11:09:12 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:52.004 
11:09:12 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:52.004 11:09:12 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:52.004 11:09:12 -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:14:52.004 11:09:12 -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:14:52.004 11:09:12 -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:14:52.004 11:09:12 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:14:52.004 11:09:12 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:14:52.004 11:09:12 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:52.004 11:09:12 -- target/rpc.sh@117 -- # (( 1296 > 0 )) 00:14:52.004 11:09:12 -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:14:52.004 11:09:12 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:14:52.004 11:09:12 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:14:52.004 11:09:12 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:52.004 11:09:12 -- target/rpc.sh@118 -- # (( 131570212 > 0 )) 00:14:52.004 11:09:12 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:52.004 11:09:12 -- target/rpc.sh@123 -- # nvmftestfini 00:14:52.004 11:09:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:52.004 11:09:12 -- nvmf/common.sh@116 -- # sync 00:14:52.004 11:09:12 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:14:52.004 11:09:12 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:14:52.004 11:09:12 -- nvmf/common.sh@119 -- # set +e 00:14:52.004 11:09:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:52.004 11:09:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:14:52.004 rmmod nvme_rdma 00:14:52.004 rmmod nvme_fabrics 00:14:52.004 11:09:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:52.004 11:09:12 -- nvmf/common.sh@123 -- # set -e 00:14:52.004 11:09:12 -- nvmf/common.sh@124 -- # return 0 00:14:52.004 11:09:12 -- nvmf/common.sh@477 -- # '[' -n 1559940 ']' 00:14:52.004 11:09:12 -- nvmf/common.sh@478 -- # killprocess 1559940 00:14:52.004 11:09:12 -- common/autotest_common.sh@936 -- # '[' -z 1559940 ']' 00:14:52.004 11:09:12 -- common/autotest_common.sh@940 -- # kill -0 1559940 00:14:52.004 11:09:12 -- common/autotest_common.sh@941 -- # uname 00:14:52.263 11:09:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:52.263 11:09:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1559940 00:14:52.263 11:09:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:52.263 11:09:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:52.263 11:09:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1559940' 00:14:52.263 killing process with pid 1559940 00:14:52.263 11:09:12 -- common/autotest_common.sh@955 -- # kill 1559940 00:14:52.263 11:09:12 -- common/autotest_common.sh@960 -- # wait 1559940 00:14:52.522 11:09:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:52.522 11:09:12 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:14:52.522 00:14:52.522 real 0m36.115s 00:14:52.522 user 2m2.513s 00:14:52.522 sys 0m5.633s 00:14:52.522 11:09:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:52.522 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:52.522 ************************************ 00:14:52.522 END TEST nvmf_rpc 00:14:52.522 ************************************ 00:14:52.522 11:09:12 -- 
nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:14:52.522 11:09:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:52.522 11:09:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:52.522 11:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:52.522 ************************************ 00:14:52.522 START TEST nvmf_invalid 00:14:52.522 ************************************ 00:14:52.522 11:09:12 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:14:52.522 * Looking for test storage... 00:14:52.522 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:52.522 11:09:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:52.522 11:09:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:52.522 11:09:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:52.781 11:09:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:52.781 11:09:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:52.781 11:09:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:52.781 11:09:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:52.781 11:09:13 -- scripts/common.sh@335 -- # IFS=.-: 00:14:52.781 11:09:13 -- scripts/common.sh@335 -- # read -ra ver1 00:14:52.781 11:09:13 -- scripts/common.sh@336 -- # IFS=.-: 00:14:52.781 11:09:13 -- scripts/common.sh@336 -- # read -ra ver2 00:14:52.781 11:09:13 -- scripts/common.sh@337 -- # local 'op=<' 00:14:52.781 11:09:13 -- scripts/common.sh@339 -- # ver1_l=2 00:14:52.781 11:09:13 -- scripts/common.sh@340 -- # ver2_l=1 00:14:52.781 11:09:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:52.781 11:09:13 -- scripts/common.sh@343 -- # case "$op" in 00:14:52.781 11:09:13 -- scripts/common.sh@344 -- # : 1 00:14:52.781 11:09:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:52.781 11:09:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:52.781 11:09:13 -- scripts/common.sh@364 -- # decimal 1 00:14:52.781 11:09:13 -- scripts/common.sh@352 -- # local d=1 00:14:52.781 11:09:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:52.781 11:09:13 -- scripts/common.sh@354 -- # echo 1 00:14:52.781 11:09:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:52.781 11:09:13 -- scripts/common.sh@365 -- # decimal 2 00:14:52.781 11:09:13 -- scripts/common.sh@352 -- # local d=2 00:14:52.781 11:09:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:52.781 11:09:13 -- scripts/common.sh@354 -- # echo 2 00:14:52.781 11:09:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:52.781 11:09:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:52.781 11:09:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:52.781 11:09:13 -- scripts/common.sh@367 -- # return 0 00:14:52.781 11:09:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:52.781 11:09:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:52.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.781 --rc genhtml_branch_coverage=1 00:14:52.781 --rc genhtml_function_coverage=1 00:14:52.781 --rc genhtml_legend=1 00:14:52.781 --rc geninfo_all_blocks=1 00:14:52.781 --rc geninfo_unexecuted_blocks=1 00:14:52.781 00:14:52.781 ' 00:14:52.781 11:09:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:52.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.781 --rc genhtml_branch_coverage=1 00:14:52.781 --rc genhtml_function_coverage=1 00:14:52.781 --rc genhtml_legend=1 00:14:52.781 --rc geninfo_all_blocks=1 00:14:52.781 --rc geninfo_unexecuted_blocks=1 00:14:52.781 00:14:52.781 ' 00:14:52.781 11:09:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:52.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.781 --rc genhtml_branch_coverage=1 00:14:52.781 --rc genhtml_function_coverage=1 00:14:52.781 --rc genhtml_legend=1 00:14:52.782 --rc geninfo_all_blocks=1 00:14:52.782 --rc geninfo_unexecuted_blocks=1 00:14:52.782 00:14:52.782 ' 00:14:52.782 11:09:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:52.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.782 --rc genhtml_branch_coverage=1 00:14:52.782 --rc genhtml_function_coverage=1 00:14:52.782 --rc genhtml_legend=1 00:14:52.782 --rc geninfo_all_blocks=1 00:14:52.782 --rc geninfo_unexecuted_blocks=1 00:14:52.782 00:14:52.782 ' 00:14:52.782 11:09:13 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:52.782 11:09:13 -- nvmf/common.sh@7 -- # uname -s 00:14:52.782 11:09:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:52.782 11:09:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:52.782 11:09:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.782 11:09:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.782 11:09:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.782 11:09:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.782 11:09:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.782 11:09:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.782 11:09:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.782 11:09:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.782 11:09:13 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:52.782 11:09:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:14:52.782 11:09:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.782 11:09:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:52.782 11:09:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:52.782 11:09:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:52.782 11:09:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.782 11:09:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.782 11:09:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.782 11:09:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.782 11:09:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.782 11:09:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.782 11:09:13 -- paths/export.sh@5 -- # export PATH 00:14:52.782 11:09:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.782 11:09:13 -- nvmf/common.sh@46 -- # : 0 00:14:52.782 11:09:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:52.782 11:09:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:52.782 11:09:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:52.782 11:09:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.782 11:09:13 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.782 11:09:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:52.782 11:09:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:52.782 11:09:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:52.782 11:09:13 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:52.782 11:09:13 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:52.782 11:09:13 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:52.782 11:09:13 -- target/invalid.sh@14 -- # target=foobar 00:14:52.782 11:09:13 -- target/invalid.sh@16 -- # RANDOM=0 00:14:52.782 11:09:13 -- target/invalid.sh@34 -- # nvmftestinit 00:14:52.782 11:09:13 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:14:52.782 11:09:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:52.782 11:09:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:52.782 11:09:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:52.782 11:09:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:52.782 11:09:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.782 11:09:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:52.782 11:09:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.782 11:09:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:52.782 11:09:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:52.782 11:09:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:52.782 11:09:13 -- common/autotest_common.sh@10 -- # set +x 00:14:58.061 11:09:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:58.061 11:09:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:58.061 11:09:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:58.061 11:09:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:58.061 11:09:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:58.061 11:09:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:58.061 11:09:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:58.061 11:09:18 -- nvmf/common.sh@294 -- # net_devs=() 00:14:58.062 11:09:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:58.062 11:09:18 -- nvmf/common.sh@295 -- # e810=() 00:14:58.062 11:09:18 -- nvmf/common.sh@295 -- # local -ga e810 00:14:58.062 11:09:18 -- nvmf/common.sh@296 -- # x722=() 00:14:58.062 11:09:18 -- nvmf/common.sh@296 -- # local -ga x722 00:14:58.062 11:09:18 -- nvmf/common.sh@297 -- # mlx=() 00:14:58.062 11:09:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:58.062 11:09:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:58.062 11:09:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:58.062 11:09:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:58.062 11:09:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:58.062 11:09:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:58.062 11:09:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:58.062 11:09:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:58.062 11:09:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:58.062 11:09:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:58.062 11:09:18 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:58.062 11:09:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:58.062 11:09:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:58.062 11:09:18 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:14:58.062 11:09:18 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:14:58.062 11:09:18 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:14:58.062 11:09:18 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:14:58.062 11:09:18 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:14:58.062 11:09:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:58.062 11:09:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:58.062 11:09:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:14:58.062 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:14:58.062 11:09:18 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:58.062 11:09:18 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:58.062 11:09:18 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:58.062 11:09:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:58.062 11:09:18 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:58.062 11:09:18 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:58.062 11:09:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:58.062 11:09:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:14:58.062 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:14:58.062 11:09:18 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:58.062 11:09:18 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:58.062 11:09:18 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:58.062 11:09:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:58.062 11:09:18 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:58.062 11:09:18 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:58.062 11:09:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:58.062 11:09:18 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:14:58.062 11:09:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:58.062 11:09:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.062 11:09:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:58.062 11:09:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.062 11:09:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:14:58.062 Found net devices under 0000:18:00.0: mlx_0_0 00:14:58.062 11:09:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:58.062 11:09:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:58.062 11:09:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.062 11:09:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:58.062 11:09:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.062 11:09:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:14:58.062 Found net devices under 0000:18:00.1: mlx_0_1 00:14:58.062 11:09:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:58.062 11:09:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:58.062 11:09:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:58.062 11:09:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:58.062 11:09:18 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:14:58.062 11:09:18 -- nvmf/common.sh@407 -- 
# [[ rdma == rdma ]] 00:14:58.062 11:09:18 -- nvmf/common.sh@408 -- # rdma_device_init 00:14:58.062 11:09:18 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:14:58.062 11:09:18 -- nvmf/common.sh@57 -- # uname 00:14:58.062 11:09:18 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:14:58.062 11:09:18 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:14:58.062 11:09:18 -- nvmf/common.sh@62 -- # modprobe ib_core 00:14:58.062 11:09:18 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:14:58.062 11:09:18 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:14:58.062 11:09:18 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:14:58.062 11:09:18 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:14:58.062 11:09:18 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:14:58.062 11:09:18 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:14:58.062 11:09:18 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:58.062 11:09:18 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:14:58.062 11:09:18 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:58.062 11:09:18 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:58.062 11:09:18 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:58.062 11:09:18 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:58.062 11:09:18 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:58.062 11:09:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:58.062 11:09:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:58.062 11:09:18 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:58.062 11:09:18 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:58.062 11:09:18 -- nvmf/common.sh@104 -- # continue 2 00:14:58.062 11:09:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:58.062 11:09:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:58.062 11:09:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:58.062 11:09:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:58.062 11:09:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:58.062 11:09:18 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:58.062 11:09:18 -- nvmf/common.sh@104 -- # continue 2 00:14:58.062 11:09:18 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:58.062 11:09:18 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:14:58.062 11:09:18 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:58.062 11:09:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:58.062 11:09:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:58.062 11:09:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:58.062 11:09:18 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:14:58.062 11:09:18 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:14:58.062 11:09:18 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:14:58.062 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:58.062 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:14:58.062 altname enp24s0f0np0 00:14:58.062 altname ens785f0np0 00:14:58.062 inet 192.168.100.8/24 scope global mlx_0_0 00:14:58.062 valid_lft forever preferred_lft forever 00:14:58.062 11:09:18 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:58.062 11:09:18 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:14:58.062 11:09:18 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:58.062 11:09:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:58.062 11:09:18 -- 
nvmf/common.sh@112 -- # awk '{print $4}' 00:14:58.062 11:09:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:58.062 11:09:18 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:14:58.062 11:09:18 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:14:58.062 11:09:18 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:14:58.062 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:58.062 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:14:58.062 altname enp24s0f1np1 00:14:58.062 altname ens785f1np1 00:14:58.062 inet 192.168.100.9/24 scope global mlx_0_1 00:14:58.062 valid_lft forever preferred_lft forever 00:14:58.062 11:09:18 -- nvmf/common.sh@410 -- # return 0 00:14:58.062 11:09:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:58.062 11:09:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:58.062 11:09:18 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:14:58.062 11:09:18 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:14:58.062 11:09:18 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:14:58.062 11:09:18 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:58.062 11:09:18 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:58.062 11:09:18 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:58.062 11:09:18 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:58.062 11:09:18 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:58.062 11:09:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:58.062 11:09:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:58.062 11:09:18 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:58.062 11:09:18 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:58.062 11:09:18 -- nvmf/common.sh@104 -- # continue 2 00:14:58.062 11:09:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:58.062 11:09:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:58.062 11:09:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:58.062 11:09:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:58.062 11:09:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:58.062 11:09:18 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:58.062 11:09:18 -- nvmf/common.sh@104 -- # continue 2 00:14:58.062 11:09:18 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:58.062 11:09:18 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:14:58.062 11:09:18 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:58.062 11:09:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:58.062 11:09:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:58.062 11:09:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:58.062 11:09:18 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:58.062 11:09:18 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:14:58.063 11:09:18 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:58.063 11:09:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:58.063 11:09:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:58.063 11:09:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:58.063 11:09:18 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:14:58.063 192.168.100.9' 00:14:58.063 11:09:18 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:14:58.063 192.168.100.9' 00:14:58.063 11:09:18 -- nvmf/common.sh@445 -- # head -n 1 00:14:58.063 11:09:18 -- nvmf/common.sh@445 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:58.063 11:09:18 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:14:58.063 192.168.100.9' 00:14:58.063 11:09:18 -- nvmf/common.sh@446 -- # tail -n +2 00:14:58.063 11:09:18 -- nvmf/common.sh@446 -- # head -n 1 00:14:58.063 11:09:18 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:58.063 11:09:18 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:14:58.063 11:09:18 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:58.063 11:09:18 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:14:58.063 11:09:18 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:14:58.063 11:09:18 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:14:58.322 11:09:18 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:58.322 11:09:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:58.322 11:09:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:58.322 11:09:18 -- common/autotest_common.sh@10 -- # set +x 00:14:58.322 11:09:18 -- nvmf/common.sh@469 -- # nvmfpid=1569395 00:14:58.322 11:09:18 -- nvmf/common.sh@470 -- # waitforlisten 1569395 00:14:58.322 11:09:18 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:58.322 11:09:18 -- common/autotest_common.sh@829 -- # '[' -z 1569395 ']' 00:14:58.322 11:09:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.322 11:09:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:58.322 11:09:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.322 11:09:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:58.322 11:09:18 -- common/autotest_common.sh@10 -- # set +x 00:14:58.322 [2024-12-13 11:09:18.682794] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:58.322 [2024-12-13 11:09:18.682838] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:58.322 EAL: No free 2048 kB hugepages reported on node 1 00:14:58.322 [2024-12-13 11:09:18.734476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:58.322 [2024-12-13 11:09:18.800213] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:58.322 [2024-12-13 11:09:18.800346] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:58.322 [2024-12-13 11:09:18.800354] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:58.322 [2024-12-13 11:09:18.800359] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:58.322 [2024-12-13 11:09:18.800401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.322 [2024-12-13 11:09:18.800421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:58.322 [2024-12-13 11:09:18.800510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:58.322 [2024-12-13 11:09:18.800511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.258 11:09:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:59.258 11:09:19 -- common/autotest_common.sh@862 -- # return 0 00:14:59.258 11:09:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:59.258 11:09:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:59.258 11:09:19 -- common/autotest_common.sh@10 -- # set +x 00:14:59.258 11:09:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:59.258 11:09:19 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:59.258 11:09:19 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode2522 00:14:59.258 [2024-12-13 11:09:19.657947] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:59.258 11:09:19 -- target/invalid.sh@40 -- # out='request: 00:14:59.258 { 00:14:59.258 "nqn": "nqn.2016-06.io.spdk:cnode2522", 00:14:59.258 "tgt_name": "foobar", 00:14:59.258 "method": "nvmf_create_subsystem", 00:14:59.259 "req_id": 1 00:14:59.259 } 00:14:59.259 Got JSON-RPC error response 00:14:59.259 response: 00:14:59.259 { 00:14:59.259 "code": -32603, 00:14:59.259 "message": "Unable to find target foobar" 00:14:59.259 }' 00:14:59.259 11:09:19 -- target/invalid.sh@41 -- # [[ request: 00:14:59.259 { 00:14:59.259 "nqn": "nqn.2016-06.io.spdk:cnode2522", 00:14:59.259 "tgt_name": "foobar", 00:14:59.259 "method": "nvmf_create_subsystem", 00:14:59.259 "req_id": 1 00:14:59.259 } 00:14:59.259 Got JSON-RPC error response 00:14:59.259 response: 00:14:59.259 { 00:14:59.259 "code": -32603, 00:14:59.259 "message": "Unable to find target foobar" 00:14:59.259 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:59.259 11:09:19 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:59.259 11:09:19 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode24875 00:14:59.518 [2024-12-13 11:09:19.838556] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24875: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:59.518 11:09:19 -- target/invalid.sh@45 -- # out='request: 00:14:59.518 { 00:14:59.518 "nqn": "nqn.2016-06.io.spdk:cnode24875", 00:14:59.518 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:59.518 "method": "nvmf_create_subsystem", 00:14:59.518 "req_id": 1 00:14:59.518 } 00:14:59.518 Got JSON-RPC error response 00:14:59.518 response: 00:14:59.518 { 00:14:59.518 "code": -32602, 00:14:59.518 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:59.518 }' 00:14:59.518 11:09:19 -- target/invalid.sh@46 -- # [[ request: 00:14:59.518 { 00:14:59.518 "nqn": "nqn.2016-06.io.spdk:cnode24875", 00:14:59.518 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:59.518 "method": "nvmf_create_subsystem", 00:14:59.518 "req_id": 1 00:14:59.518 } 00:14:59.518 Got JSON-RPC error response 00:14:59.518 response: 00:14:59.518 { 00:14:59.518 
"code": -32602, 00:14:59.518 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:59.518 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:59.518 11:09:19 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:59.518 11:09:19 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode9848 00:14:59.518 [2024-12-13 11:09:20.019117] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9848: invalid model number 'SPDK_Controller' 00:14:59.518 11:09:20 -- target/invalid.sh@50 -- # out='request: 00:14:59.518 { 00:14:59.518 "nqn": "nqn.2016-06.io.spdk:cnode9848", 00:14:59.518 "model_number": "SPDK_Controller\u001f", 00:14:59.518 "method": "nvmf_create_subsystem", 00:14:59.519 "req_id": 1 00:14:59.519 } 00:14:59.519 Got JSON-RPC error response 00:14:59.519 response: 00:14:59.519 { 00:14:59.519 "code": -32602, 00:14:59.519 "message": "Invalid MN SPDK_Controller\u001f" 00:14:59.519 }' 00:14:59.519 11:09:20 -- target/invalid.sh@51 -- # [[ request: 00:14:59.519 { 00:14:59.519 "nqn": "nqn.2016-06.io.spdk:cnode9848", 00:14:59.519 "model_number": "SPDK_Controller\u001f", 00:14:59.519 "method": "nvmf_create_subsystem", 00:14:59.519 "req_id": 1 00:14:59.519 } 00:14:59.519 Got JSON-RPC error response 00:14:59.519 response: 00:14:59.519 { 00:14:59.519 "code": -32602, 00:14:59.519 "message": "Invalid MN SPDK_Controller\u001f" 00:14:59.519 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:59.519 11:09:20 -- target/invalid.sh@54 -- # gen_random_s 21 00:14:59.519 11:09:20 -- target/invalid.sh@19 -- # local length=21 ll 00:14:59.519 11:09:20 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:59.519 11:09:20 -- target/invalid.sh@21 -- # local chars 00:14:59.519 11:09:20 -- target/invalid.sh@22 -- # local string 00:14:59.519 11:09:20 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:59.519 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.519 11:09:20 -- target/invalid.sh@25 -- # printf %x 76 00:14:59.519 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:14:59.519 11:09:20 -- target/invalid.sh@25 -- # string+=L 00:14:59.519 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.519 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.519 11:09:20 -- target/invalid.sh@25 -- # printf %x 56 00:14:59.519 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:59.519 11:09:20 -- target/invalid.sh@25 -- # string+=8 00:14:59.519 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.519 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.519 11:09:20 -- target/invalid.sh@25 -- # printf %x 66 00:14:59.519 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x42' 00:14:59.519 11:09:20 -- target/invalid.sh@25 -- # string+=B 00:14:59.519 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.519 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.519 11:09:20 -- target/invalid.sh@25 -- # printf %x 45 00:14:59.519 11:09:20 -- target/invalid.sh@25 -- # echo -e 
'\x2d' 00:14:59.519 11:09:20 -- target/invalid.sh@25 -- # string+=- 00:14:59.519 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.519 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.519 11:09:20 -- target/invalid.sh@25 -- # printf %x 118 00:14:59.519 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x76' 00:14:59.519 11:09:20 -- target/invalid.sh@25 -- # string+=v 00:14:59.519 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.519 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # printf %x 85 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # string+=U 00:14:59.778 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.778 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # printf %x 39 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # string+=\' 00:14:59.778 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.778 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # printf %x 118 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x76' 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # string+=v 00:14:59.778 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.778 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # printf %x 116 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x74' 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # string+=t 00:14:59.778 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.778 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # printf %x 53 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x35' 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # string+=5 00:14:59.778 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.778 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # printf %x 112 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x70' 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # string+=p 00:14:59.778 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.778 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # printf %x 65 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x41' 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # string+=A 00:14:59.778 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.778 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # printf %x 46 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # string+=. 
00:14:59.778 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.778 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # printf %x 105 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x69' 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # string+=i 00:14:59.778 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.778 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # printf %x 85 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # string+=U 00:14:59.778 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.778 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # printf %x 107 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # string+=k 00:14:59.778 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.778 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # printf %x 106 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # string+=j 00:14:59.778 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.778 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # printf %x 56 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # string+=8 00:14:59.778 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.778 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # printf %x 69 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # string+=E 00:14:59.778 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.778 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # printf %x 78 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # string+=N 00:14:59.778 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.778 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # printf %x 69 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:59.778 11:09:20 -- target/invalid.sh@25 -- # string+=E 00:14:59.778 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.778 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.778 11:09:20 -- target/invalid.sh@28 -- # [[ L == \- ]] 00:14:59.778 11:09:20 -- target/invalid.sh@31 -- # echo 'L8B-vU'\''vt5pA.iUkj8ENE' 00:14:59.778 11:09:20 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'L8B-vU'\''vt5pA.iUkj8ENE' nqn.2016-06.io.spdk:cnode660 00:14:59.778 [2024-12-13 11:09:20.340138] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode660: invalid serial number 'L8B-vU'vt5pA.iUkj8ENE' 00:15:00.038 11:09:20 -- target/invalid.sh@54 -- # out='request: 00:15:00.038 { 00:15:00.038 "nqn": "nqn.2016-06.io.spdk:cnode660", 00:15:00.038 "serial_number": "L8B-vU'\''vt5pA.iUkj8ENE", 00:15:00.038 "method": "nvmf_create_subsystem", 00:15:00.038 "req_id": 1 00:15:00.038 } 00:15:00.038 Got JSON-RPC error response 
00:15:00.038 response: 00:15:00.038 { 00:15:00.038 "code": -32602, 00:15:00.038 "message": "Invalid SN L8B-vU'\''vt5pA.iUkj8ENE" 00:15:00.038 }' 00:15:00.038 11:09:20 -- target/invalid.sh@55 -- # [[ request: 00:15:00.038 { 00:15:00.038 "nqn": "nqn.2016-06.io.spdk:cnode660", 00:15:00.038 "serial_number": "L8B-vU'vt5pA.iUkj8ENE", 00:15:00.038 "method": "nvmf_create_subsystem", 00:15:00.038 "req_id": 1 00:15:00.038 } 00:15:00.038 Got JSON-RPC error response 00:15:00.038 response: 00:15:00.038 { 00:15:00.038 "code": -32602, 00:15:00.038 "message": "Invalid SN L8B-vU'vt5pA.iUkj8ENE" 00:15:00.038 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:00.038 11:09:20 -- target/invalid.sh@58 -- # gen_random_s 41 00:15:00.038 11:09:20 -- target/invalid.sh@19 -- # local length=41 ll 00:15:00.038 11:09:20 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:00.038 11:09:20 -- target/invalid.sh@21 -- # local chars 00:15:00.038 11:09:20 -- target/invalid.sh@22 -- # local string 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # printf %x 110 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # string+=n 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # printf %x 62 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # string+='>' 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # printf %x 70 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x46' 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # string+=F 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # printf %x 120 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x78' 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # string+=x 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # printf %x 42 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # string+='*' 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # printf %x 95 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # string+=_ 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.038 11:09:20 
-- target/invalid.sh@25 -- # printf %x 49 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x31' 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # string+=1 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # printf %x 73 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x49' 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # string+=I 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # printf %x 120 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x78' 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # string+=x 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # printf %x 68 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x44' 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # string+=D 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # printf %x 35 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x23' 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # string+='#' 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # printf %x 51 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x33' 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # string+=3 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # printf %x 45 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # string+=- 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # printf %x 118 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x76' 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # string+=v 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # printf %x 48 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x30' 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # string+=0 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # printf %x 79 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # string+=O 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # printf %x 43 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # string+=+ 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.038 11:09:20 -- 
target/invalid.sh@25 -- # printf %x 102 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x66' 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # string+=f 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # printf %x 103 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x67' 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # string+=g 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # printf %x 39 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x27' 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # string+=\' 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # printf %x 107 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:15:00.038 11:09:20 -- target/invalid.sh@25 -- # string+=k 00:15:00.038 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.039 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # printf %x 103 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x67' 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # string+=g 00:15:00.039 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.039 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # printf %x 65 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x41' 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # string+=A 00:15:00.039 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.039 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # printf %x 101 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x65' 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # string+=e 00:15:00.039 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.039 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # printf %x 116 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x74' 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # string+=t 00:15:00.039 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.039 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # printf %x 41 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x29' 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # string+=')' 00:15:00.039 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.039 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # printf %x 127 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # string+=$'\177' 00:15:00.039 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.039 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # printf %x 76 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # string+=L 00:15:00.039 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.039 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.039 11:09:20 
-- target/invalid.sh@25 -- # printf %x 124 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # string+='|' 00:15:00.039 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.039 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # printf %x 115 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x73' 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # string+=s 00:15:00.039 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.039 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # printf %x 122 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # string+=z 00:15:00.039 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.039 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # printf %x 70 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x46' 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # string+=F 00:15:00.039 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.039 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # printf %x 41 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x29' 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # string+=')' 00:15:00.039 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.039 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # printf %x 82 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x52' 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # string+=R 00:15:00.039 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.039 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # printf %x 119 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x77' 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # string+=w 00:15:00.039 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.039 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # printf %x 123 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # string+='{' 00:15:00.039 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.039 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # printf %x 111 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # string+=o 00:15:00.039 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.039 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # printf %x 119 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x77' 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # string+=w 00:15:00.039 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.039 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # printf %x 123 00:15:00.039 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:15:00.298 11:09:20 -- target/invalid.sh@25 -- # string+='{' 00:15:00.298 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.298 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.298 
11:09:20 -- target/invalid.sh@25 -- # printf %x 58 00:15:00.298 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:15:00.298 11:09:20 -- target/invalid.sh@25 -- # string+=: 00:15:00.298 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.298 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.298 11:09:20 -- target/invalid.sh@25 -- # printf %x 60 00:15:00.298 11:09:20 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:15:00.298 11:09:20 -- target/invalid.sh@25 -- # string+='<' 00:15:00.298 11:09:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.298 11:09:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.298 11:09:20 -- target/invalid.sh@28 -- # [[ n == \- ]] 00:15:00.299 11:09:20 -- target/invalid.sh@31 -- # echo 'n>Fx*_1IxD#3-v0O+fg'\''kgAet)L|szF)Rw{ow{:<' 00:15:00.299 11:09:20 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'n>Fx*_1IxD#3-v0O+fg'\''kgAet)L|szF)Rw{ow{:<' nqn.2016-06.io.spdk:cnode17864 00:15:00.299 [2024-12-13 11:09:20.761567] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17864: invalid model number 'n>Fx*_1IxD#3-v0O+fg'kgAet)L|szF)Rw{ow{:<' 00:15:00.299 11:09:20 -- target/invalid.sh@58 -- # out='request: 00:15:00.299 { 00:15:00.299 "nqn": "nqn.2016-06.io.spdk:cnode17864", 00:15:00.299 "model_number": "n>Fx*_1IxD#3-v0O+fg'\''kgAet)\u007fL|szF)Rw{ow{:<", 00:15:00.299 "method": "nvmf_create_subsystem", 00:15:00.299 "req_id": 1 00:15:00.299 } 00:15:00.299 Got JSON-RPC error response 00:15:00.299 response: 00:15:00.299 { 00:15:00.299 "code": -32602, 00:15:00.299 "message": "Invalid MN n>Fx*_1IxD#3-v0O+fg'\''kgAet)\u007fL|szF)Rw{ow{:<" 00:15:00.299 }' 00:15:00.299 11:09:20 -- target/invalid.sh@59 -- # [[ request: 00:15:00.299 { 00:15:00.299 "nqn": "nqn.2016-06.io.spdk:cnode17864", 00:15:00.299 "model_number": "n>Fx*_1IxD#3-v0O+fg'kgAet)\u007fL|szF)Rw{ow{:<", 00:15:00.299 "method": "nvmf_create_subsystem", 00:15:00.299 "req_id": 1 00:15:00.299 } 00:15:00.299 Got JSON-RPC error response 00:15:00.299 response: 00:15:00.299 { 00:15:00.299 "code": -32602, 00:15:00.299 "message": "Invalid MN n>Fx*_1IxD#3-v0O+fg'kgAet)\u007fL|szF)Rw{ow{:<" 00:15:00.299 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:00.299 11:09:20 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:15:00.557 [2024-12-13 11:09:20.953432] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c9b230/0x1c9f720) succeed. 00:15:00.557 [2024-12-13 11:09:20.961468] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c9c820/0x1ce0dc0) succeed. 
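The invalid.sh cases traced above all follow one pattern: gen_random_s assembles a string one byte at a time until it is one character longer than the NVMe field under test (the serial-number field is 20 bytes, the model-number field 40 bytes), hands it to the nvmf_create_subsystem RPC, and then checks that the captured JSON-RPC error (code -32602) carries the expected "Invalid SN" or "Invalid MN" text. A minimal stand-alone sketch of the serial-number case, assuming a running target and the rpc.py path used throughout this run:

# sketch only -- mirrors the 21-character serial-number case from the log above
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
serial="L8B-vU'vt5pA.iUkj8ENE"   # 21 chars, one past the 20-byte NVMe SN field
out=$("$rpc" nvmf_create_subsystem -s "$serial" nqn.2016-06.io.spdk:cnode660 2>&1) || true
[[ $out == *"Invalid SN"* ]] && echo "subsystem creation rejected as expected"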
00:15:00.558 11:09:21 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:15:00.816 11:09:21 -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:15:00.816 11:09:21 -- target/invalid.sh@67 -- # echo '192.168.100.8 00:15:00.816 192.168.100.9' 00:15:00.816 11:09:21 -- target/invalid.sh@67 -- # head -n 1 00:15:00.816 11:09:21 -- target/invalid.sh@67 -- # IP=192.168.100.8 00:15:00.816 11:09:21 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:15:01.075 [2024-12-13 11:09:21.422987] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:15:01.075 11:09:21 -- target/invalid.sh@69 -- # out='request: 00:15:01.075 { 00:15:01.075 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:01.075 "listen_address": { 00:15:01.075 "trtype": "rdma", 00:15:01.075 "traddr": "192.168.100.8", 00:15:01.075 "trsvcid": "4421" 00:15:01.075 }, 00:15:01.075 "method": "nvmf_subsystem_remove_listener", 00:15:01.075 "req_id": 1 00:15:01.075 } 00:15:01.075 Got JSON-RPC error response 00:15:01.075 response: 00:15:01.075 { 00:15:01.075 "code": -32602, 00:15:01.075 "message": "Invalid parameters" 00:15:01.075 }' 00:15:01.075 11:09:21 -- target/invalid.sh@70 -- # [[ request: 00:15:01.075 { 00:15:01.075 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:01.075 "listen_address": { 00:15:01.075 "trtype": "rdma", 00:15:01.075 "traddr": "192.168.100.8", 00:15:01.075 "trsvcid": "4421" 00:15:01.075 }, 00:15:01.075 "method": "nvmf_subsystem_remove_listener", 00:15:01.075 "req_id": 1 00:15:01.075 } 00:15:01.075 Got JSON-RPC error response 00:15:01.075 response: 00:15:01.075 { 00:15:01.075 "code": -32602, 00:15:01.075 "message": "Invalid parameters" 00:15:01.075 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:15:01.075 11:09:21 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16112 -i 0 00:15:01.075 [2024-12-13 11:09:21.607563] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16112: invalid cntlid range [0-65519] 00:15:01.075 11:09:21 -- target/invalid.sh@73 -- # out='request: 00:15:01.075 { 00:15:01.075 "nqn": "nqn.2016-06.io.spdk:cnode16112", 00:15:01.075 "min_cntlid": 0, 00:15:01.075 "method": "nvmf_create_subsystem", 00:15:01.075 "req_id": 1 00:15:01.075 } 00:15:01.075 Got JSON-RPC error response 00:15:01.075 response: 00:15:01.075 { 00:15:01.075 "code": -32602, 00:15:01.075 "message": "Invalid cntlid range [0-65519]" 00:15:01.075 }' 00:15:01.075 11:09:21 -- target/invalid.sh@74 -- # [[ request: 00:15:01.075 { 00:15:01.075 "nqn": "nqn.2016-06.io.spdk:cnode16112", 00:15:01.075 "min_cntlid": 0, 00:15:01.075 "method": "nvmf_create_subsystem", 00:15:01.075 "req_id": 1 00:15:01.075 } 00:15:01.075 Got JSON-RPC error response 00:15:01.075 response: 00:15:01.075 { 00:15:01.075 "code": -32602, 00:15:01.075 "message": "Invalid cntlid range [0-65519]" 00:15:01.075 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:01.075 11:09:21 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19233 -i 65520 00:15:01.334 [2024-12-13 11:09:21.792201] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19233: invalid cntlid range [65520-65519] 00:15:01.334 
11:09:21 -- target/invalid.sh@75 -- # out='request: 00:15:01.334 { 00:15:01.334 "nqn": "nqn.2016-06.io.spdk:cnode19233", 00:15:01.334 "min_cntlid": 65520, 00:15:01.334 "method": "nvmf_create_subsystem", 00:15:01.334 "req_id": 1 00:15:01.334 } 00:15:01.334 Got JSON-RPC error response 00:15:01.334 response: 00:15:01.334 { 00:15:01.334 "code": -32602, 00:15:01.334 "message": "Invalid cntlid range [65520-65519]" 00:15:01.334 }' 00:15:01.334 11:09:21 -- target/invalid.sh@76 -- # [[ request: 00:15:01.334 { 00:15:01.334 "nqn": "nqn.2016-06.io.spdk:cnode19233", 00:15:01.334 "min_cntlid": 65520, 00:15:01.334 "method": "nvmf_create_subsystem", 00:15:01.334 "req_id": 1 00:15:01.334 } 00:15:01.334 Got JSON-RPC error response 00:15:01.334 response: 00:15:01.334 { 00:15:01.334 "code": -32602, 00:15:01.334 "message": "Invalid cntlid range [65520-65519]" 00:15:01.334 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:01.334 11:09:21 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16017 -I 0 00:15:01.593 [2024-12-13 11:09:21.972821] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16017: invalid cntlid range [1-0] 00:15:01.593 11:09:21 -- target/invalid.sh@77 -- # out='request: 00:15:01.593 { 00:15:01.593 "nqn": "nqn.2016-06.io.spdk:cnode16017", 00:15:01.593 "max_cntlid": 0, 00:15:01.593 "method": "nvmf_create_subsystem", 00:15:01.593 "req_id": 1 00:15:01.593 } 00:15:01.593 Got JSON-RPC error response 00:15:01.593 response: 00:15:01.593 { 00:15:01.593 "code": -32602, 00:15:01.593 "message": "Invalid cntlid range [1-0]" 00:15:01.593 }' 00:15:01.593 11:09:22 -- target/invalid.sh@78 -- # [[ request: 00:15:01.593 { 00:15:01.593 "nqn": "nqn.2016-06.io.spdk:cnode16017", 00:15:01.593 "max_cntlid": 0, 00:15:01.593 "method": "nvmf_create_subsystem", 00:15:01.593 "req_id": 1 00:15:01.593 } 00:15:01.593 Got JSON-RPC error response 00:15:01.593 response: 00:15:01.594 { 00:15:01.594 "code": -32602, 00:15:01.594 "message": "Invalid cntlid range [1-0]" 00:15:01.594 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:01.594 11:09:22 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27373 -I 65520 00:15:01.594 [2024-12-13 11:09:22.153434] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27373: invalid cntlid range [1-65520] 00:15:01.853 11:09:22 -- target/invalid.sh@79 -- # out='request: 00:15:01.853 { 00:15:01.853 "nqn": "nqn.2016-06.io.spdk:cnode27373", 00:15:01.853 "max_cntlid": 65520, 00:15:01.853 "method": "nvmf_create_subsystem", 00:15:01.853 "req_id": 1 00:15:01.853 } 00:15:01.853 Got JSON-RPC error response 00:15:01.853 response: 00:15:01.853 { 00:15:01.853 "code": -32602, 00:15:01.853 "message": "Invalid cntlid range [1-65520]" 00:15:01.853 }' 00:15:01.853 11:09:22 -- target/invalid.sh@80 -- # [[ request: 00:15:01.853 { 00:15:01.853 "nqn": "nqn.2016-06.io.spdk:cnode27373", 00:15:01.853 "max_cntlid": 65520, 00:15:01.853 "method": "nvmf_create_subsystem", 00:15:01.853 "req_id": 1 00:15:01.853 } 00:15:01.853 Got JSON-RPC error response 00:15:01.853 response: 00:15:01.853 { 00:15:01.853 "code": -32602, 00:15:01.853 "message": "Invalid cntlid range [1-65520]" 00:15:01.853 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:01.853 11:09:22 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode10147 -i 6 -I 5 00:15:01.853 [2024-12-13 11:09:22.330079] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10147: invalid cntlid range [6-5] 00:15:01.853 11:09:22 -- target/invalid.sh@83 -- # out='request: 00:15:01.853 { 00:15:01.853 "nqn": "nqn.2016-06.io.spdk:cnode10147", 00:15:01.853 "min_cntlid": 6, 00:15:01.853 "max_cntlid": 5, 00:15:01.853 "method": "nvmf_create_subsystem", 00:15:01.853 "req_id": 1 00:15:01.853 } 00:15:01.853 Got JSON-RPC error response 00:15:01.853 response: 00:15:01.853 { 00:15:01.853 "code": -32602, 00:15:01.853 "message": "Invalid cntlid range [6-5]" 00:15:01.853 }' 00:15:01.853 11:09:22 -- target/invalid.sh@84 -- # [[ request: 00:15:01.853 { 00:15:01.853 "nqn": "nqn.2016-06.io.spdk:cnode10147", 00:15:01.853 "min_cntlid": 6, 00:15:01.853 "max_cntlid": 5, 00:15:01.853 "method": "nvmf_create_subsystem", 00:15:01.853 "req_id": 1 00:15:01.853 } 00:15:01.853 Got JSON-RPC error response 00:15:01.853 response: 00:15:01.853 { 00:15:01.853 "code": -32602, 00:15:01.853 "message": "Invalid cntlid range [6-5]" 00:15:01.853 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:01.853 11:09:22 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:15:02.112 11:09:22 -- target/invalid.sh@87 -- # out='request: 00:15:02.112 { 00:15:02.112 "name": "foobar", 00:15:02.112 "method": "nvmf_delete_target", 00:15:02.112 "req_id": 1 00:15:02.112 } 00:15:02.112 Got JSON-RPC error response 00:15:02.112 response: 00:15:02.112 { 00:15:02.112 "code": -32602, 00:15:02.112 "message": "The specified target doesn'\''t exist, cannot delete it." 00:15:02.112 }' 00:15:02.112 11:09:22 -- target/invalid.sh@88 -- # [[ request: 00:15:02.112 { 00:15:02.112 "name": "foobar", 00:15:02.112 "method": "nvmf_delete_target", 00:15:02.112 "req_id": 1 00:15:02.112 } 00:15:02.112 Got JSON-RPC error response 00:15:02.112 response: 00:15:02.112 { 00:15:02.112 "code": -32602, 00:15:02.112 "message": "The specified target doesn't exist, cannot delete it." 
00:15:02.112 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:15:02.112 11:09:22 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:15:02.112 11:09:22 -- target/invalid.sh@91 -- # nvmftestfini 00:15:02.112 11:09:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:02.112 11:09:22 -- nvmf/common.sh@116 -- # sync 00:15:02.112 11:09:22 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:15:02.112 11:09:22 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:15:02.112 11:09:22 -- nvmf/common.sh@119 -- # set +e 00:15:02.112 11:09:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:02.112 11:09:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:15:02.112 rmmod nvme_rdma 00:15:02.112 rmmod nvme_fabrics 00:15:02.112 11:09:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:02.112 11:09:22 -- nvmf/common.sh@123 -- # set -e 00:15:02.112 11:09:22 -- nvmf/common.sh@124 -- # return 0 00:15:02.112 11:09:22 -- nvmf/common.sh@477 -- # '[' -n 1569395 ']' 00:15:02.112 11:09:22 -- nvmf/common.sh@478 -- # killprocess 1569395 00:15:02.112 11:09:22 -- common/autotest_common.sh@936 -- # '[' -z 1569395 ']' 00:15:02.112 11:09:22 -- common/autotest_common.sh@940 -- # kill -0 1569395 00:15:02.112 11:09:22 -- common/autotest_common.sh@941 -- # uname 00:15:02.112 11:09:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:02.112 11:09:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1569395 00:15:02.112 11:09:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:02.112 11:09:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:02.112 11:09:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1569395' 00:15:02.112 killing process with pid 1569395 00:15:02.112 11:09:22 -- common/autotest_common.sh@955 -- # kill 1569395 00:15:02.112 11:09:22 -- common/autotest_common.sh@960 -- # wait 1569395 00:15:02.372 11:09:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:02.372 11:09:22 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:15:02.372 00:15:02.372 real 0m9.883s 00:15:02.372 user 0m19.703s 00:15:02.372 sys 0m5.098s 00:15:02.372 11:09:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:02.372 11:09:22 -- common/autotest_common.sh@10 -- # set +x 00:15:02.372 ************************************ 00:15:02.372 END TEST nvmf_invalid 00:15:02.372 ************************************ 00:15:02.372 11:09:22 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:15:02.372 11:09:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:02.372 11:09:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:02.372 11:09:22 -- common/autotest_common.sh@10 -- # set +x 00:15:02.372 ************************************ 00:15:02.372 START TEST nvmf_abort 00:15:02.372 ************************************ 00:15:02.372 11:09:22 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:15:02.631 * Looking for test storage... 
00:15:02.631 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:02.631 11:09:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:02.631 11:09:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:02.631 11:09:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:02.632 11:09:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:02.632 11:09:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:02.632 11:09:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:02.632 11:09:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:02.632 11:09:23 -- scripts/common.sh@335 -- # IFS=.-: 00:15:02.632 11:09:23 -- scripts/common.sh@335 -- # read -ra ver1 00:15:02.632 11:09:23 -- scripts/common.sh@336 -- # IFS=.-: 00:15:02.632 11:09:23 -- scripts/common.sh@336 -- # read -ra ver2 00:15:02.632 11:09:23 -- scripts/common.sh@337 -- # local 'op=<' 00:15:02.632 11:09:23 -- scripts/common.sh@339 -- # ver1_l=2 00:15:02.632 11:09:23 -- scripts/common.sh@340 -- # ver2_l=1 00:15:02.632 11:09:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:02.632 11:09:23 -- scripts/common.sh@343 -- # case "$op" in 00:15:02.632 11:09:23 -- scripts/common.sh@344 -- # : 1 00:15:02.632 11:09:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:02.632 11:09:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:02.632 11:09:23 -- scripts/common.sh@364 -- # decimal 1 00:15:02.632 11:09:23 -- scripts/common.sh@352 -- # local d=1 00:15:02.632 11:09:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:02.632 11:09:23 -- scripts/common.sh@354 -- # echo 1 00:15:02.632 11:09:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:02.632 11:09:23 -- scripts/common.sh@365 -- # decimal 2 00:15:02.632 11:09:23 -- scripts/common.sh@352 -- # local d=2 00:15:02.632 11:09:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:02.632 11:09:23 -- scripts/common.sh@354 -- # echo 2 00:15:02.632 11:09:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:02.632 11:09:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:02.632 11:09:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:02.632 11:09:23 -- scripts/common.sh@367 -- # return 0 00:15:02.632 11:09:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:02.632 11:09:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:02.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.632 --rc genhtml_branch_coverage=1 00:15:02.632 --rc genhtml_function_coverage=1 00:15:02.632 --rc genhtml_legend=1 00:15:02.632 --rc geninfo_all_blocks=1 00:15:02.632 --rc geninfo_unexecuted_blocks=1 00:15:02.632 00:15:02.632 ' 00:15:02.632 11:09:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:02.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.632 --rc genhtml_branch_coverage=1 00:15:02.632 --rc genhtml_function_coverage=1 00:15:02.632 --rc genhtml_legend=1 00:15:02.632 --rc geninfo_all_blocks=1 00:15:02.632 --rc geninfo_unexecuted_blocks=1 00:15:02.632 00:15:02.632 ' 00:15:02.632 11:09:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:02.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.632 --rc genhtml_branch_coverage=1 00:15:02.632 --rc genhtml_function_coverage=1 00:15:02.632 --rc genhtml_legend=1 00:15:02.632 --rc geninfo_all_blocks=1 00:15:02.632 --rc geninfo_unexecuted_blocks=1 00:15:02.632 00:15:02.632 ' 
00:15:02.632 11:09:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:02.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.632 --rc genhtml_branch_coverage=1 00:15:02.632 --rc genhtml_function_coverage=1 00:15:02.632 --rc genhtml_legend=1 00:15:02.632 --rc geninfo_all_blocks=1 00:15:02.632 --rc geninfo_unexecuted_blocks=1 00:15:02.632 00:15:02.632 ' 00:15:02.632 11:09:23 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:02.632 11:09:23 -- nvmf/common.sh@7 -- # uname -s 00:15:02.632 11:09:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:02.632 11:09:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:02.632 11:09:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:02.632 11:09:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:02.632 11:09:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:02.632 11:09:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:02.632 11:09:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:02.632 11:09:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:02.632 11:09:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:02.632 11:09:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:02.632 11:09:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:02.632 11:09:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:15:02.632 11:09:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:02.632 11:09:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:02.632 11:09:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:02.632 11:09:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:02.632 11:09:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:02.632 11:09:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:02.632 11:09:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:02.632 11:09:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.632 11:09:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.632 11:09:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.632 11:09:23 -- paths/export.sh@5 -- # export PATH 00:15:02.632 11:09:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.632 11:09:23 -- nvmf/common.sh@46 -- # : 0 00:15:02.632 11:09:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:02.632 11:09:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:02.632 11:09:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:02.632 11:09:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:02.632 11:09:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:02.632 11:09:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:02.632 11:09:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:02.632 11:09:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:02.632 11:09:23 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:02.632 11:09:23 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:15:02.632 11:09:23 -- target/abort.sh@14 -- # nvmftestinit 00:15:02.632 11:09:23 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:15:02.632 11:09:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:02.632 11:09:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:02.632 11:09:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:02.632 11:09:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:02.632 11:09:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.632 11:09:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.632 11:09:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.632 11:09:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:02.632 11:09:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:02.632 11:09:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:02.632 11:09:23 -- common/autotest_common.sh@10 -- # set +x 00:15:07.908 11:09:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:07.908 11:09:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:07.908 11:09:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:07.908 11:09:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:07.908 11:09:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:07.908 11:09:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:07.908 11:09:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:07.908 11:09:28 -- nvmf/common.sh@294 -- # net_devs=() 00:15:07.908 11:09:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:07.908 11:09:28 -- nvmf/common.sh@295 -- 
# e810=() 00:15:07.908 11:09:28 -- nvmf/common.sh@295 -- # local -ga e810 00:15:07.908 11:09:28 -- nvmf/common.sh@296 -- # x722=() 00:15:07.908 11:09:28 -- nvmf/common.sh@296 -- # local -ga x722 00:15:07.908 11:09:28 -- nvmf/common.sh@297 -- # mlx=() 00:15:07.908 11:09:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:07.908 11:09:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:07.908 11:09:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:07.908 11:09:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:07.908 11:09:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:07.908 11:09:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:07.908 11:09:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:07.908 11:09:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:07.908 11:09:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:07.908 11:09:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:07.908 11:09:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:07.908 11:09:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:07.908 11:09:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:07.908 11:09:28 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:15:07.908 11:09:28 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:15:07.908 11:09:28 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:15:07.908 11:09:28 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:15:07.908 11:09:28 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:15:07.908 11:09:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:07.908 11:09:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:07.908 11:09:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:15:07.908 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:15:07.908 11:09:28 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:07.908 11:09:28 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:07.908 11:09:28 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:07.908 11:09:28 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:07.908 11:09:28 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:07.908 11:09:28 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:07.908 11:09:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:07.908 11:09:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:15:07.908 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:15:07.908 11:09:28 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:07.908 11:09:28 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:07.908 11:09:28 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:07.908 11:09:28 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:07.908 11:09:28 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:07.908 11:09:28 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:07.908 11:09:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:07.908 11:09:28 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:15:07.908 11:09:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:07.908 11:09:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:07.908 11:09:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
00:15:07.908 11:09:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:07.908 11:09:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:15:07.908 Found net devices under 0000:18:00.0: mlx_0_0 00:15:07.908 11:09:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:07.908 11:09:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:07.908 11:09:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:07.908 11:09:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:07.908 11:09:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:07.908 11:09:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:15:07.908 Found net devices under 0000:18:00.1: mlx_0_1 00:15:07.908 11:09:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:07.908 11:09:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:07.908 11:09:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:07.908 11:09:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:07.908 11:09:28 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:15:07.908 11:09:28 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:15:07.908 11:09:28 -- nvmf/common.sh@408 -- # rdma_device_init 00:15:07.908 11:09:28 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:15:07.908 11:09:28 -- nvmf/common.sh@57 -- # uname 00:15:07.908 11:09:28 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:15:07.908 11:09:28 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:15:07.908 11:09:28 -- nvmf/common.sh@62 -- # modprobe ib_core 00:15:07.908 11:09:28 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:15:07.908 11:09:28 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:15:07.908 11:09:28 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:15:07.908 11:09:28 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:15:07.908 11:09:28 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:15:07.908 11:09:28 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:15:07.908 11:09:28 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:07.908 11:09:28 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:15:07.908 11:09:28 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:07.908 11:09:28 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:07.908 11:09:28 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:07.908 11:09:28 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:07.908 11:09:28 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:07.908 11:09:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:07.908 11:09:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:07.908 11:09:28 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:07.908 11:09:28 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:07.908 11:09:28 -- nvmf/common.sh@104 -- # continue 2 00:15:07.908 11:09:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:07.908 11:09:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:07.908 11:09:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:07.908 11:09:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:07.908 11:09:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:07.908 11:09:28 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:07.908 11:09:28 -- nvmf/common.sh@104 -- # continue 2 00:15:07.908 11:09:28 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:15:07.908 11:09:28 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:15:07.908 11:09:28 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:07.908 11:09:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:07.908 11:09:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:07.908 11:09:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:07.909 11:09:28 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:15:07.909 11:09:28 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:15:07.909 11:09:28 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:15:07.909 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:07.909 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:15:07.909 altname enp24s0f0np0 00:15:07.909 altname ens785f0np0 00:15:07.909 inet 192.168.100.8/24 scope global mlx_0_0 00:15:07.909 valid_lft forever preferred_lft forever 00:15:07.909 11:09:28 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:08.168 11:09:28 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:15:08.168 11:09:28 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:08.168 11:09:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:08.168 11:09:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:08.168 11:09:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:08.168 11:09:28 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:15:08.168 11:09:28 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:15:08.168 11:09:28 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:15:08.168 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:08.168 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:15:08.168 altname enp24s0f1np1 00:15:08.168 altname ens785f1np1 00:15:08.168 inet 192.168.100.9/24 scope global mlx_0_1 00:15:08.168 valid_lft forever preferred_lft forever 00:15:08.168 11:09:28 -- nvmf/common.sh@410 -- # return 0 00:15:08.168 11:09:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:08.168 11:09:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:08.168 11:09:28 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:15:08.168 11:09:28 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:15:08.168 11:09:28 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:15:08.168 11:09:28 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:08.168 11:09:28 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:08.168 11:09:28 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:08.168 11:09:28 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:08.168 11:09:28 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:08.168 11:09:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:08.168 11:09:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:08.168 11:09:28 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:08.168 11:09:28 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:08.168 11:09:28 -- nvmf/common.sh@104 -- # continue 2 00:15:08.168 11:09:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:08.168 11:09:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:08.168 11:09:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:08.168 11:09:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:08.168 11:09:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:08.168 11:09:28 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:08.168 11:09:28 -- 
nvmf/common.sh@104 -- # continue 2 00:15:08.168 11:09:28 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:08.168 11:09:28 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:15:08.168 11:09:28 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:08.168 11:09:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:08.168 11:09:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:08.168 11:09:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:08.168 11:09:28 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:08.168 11:09:28 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:15:08.168 11:09:28 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:08.168 11:09:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:08.168 11:09:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:08.168 11:09:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:08.168 11:09:28 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:15:08.168 192.168.100.9' 00:15:08.168 11:09:28 -- nvmf/common.sh@445 -- # head -n 1 00:15:08.168 11:09:28 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:15:08.168 192.168.100.9' 00:15:08.168 11:09:28 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:08.168 11:09:28 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:08.168 192.168.100.9' 00:15:08.168 11:09:28 -- nvmf/common.sh@446 -- # tail -n +2 00:15:08.168 11:09:28 -- nvmf/common.sh@446 -- # head -n 1 00:15:08.168 11:09:28 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:08.168 11:09:28 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:15:08.168 11:09:28 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:08.168 11:09:28 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:15:08.168 11:09:28 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:15:08.168 11:09:28 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:15:08.168 11:09:28 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:15:08.168 11:09:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:08.168 11:09:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:08.168 11:09:28 -- common/autotest_common.sh@10 -- # set +x 00:15:08.168 11:09:28 -- nvmf/common.sh@469 -- # nvmfpid=1573595 00:15:08.168 11:09:28 -- nvmf/common.sh@470 -- # waitforlisten 1573595 00:15:08.168 11:09:28 -- common/autotest_common.sh@829 -- # '[' -z 1573595 ']' 00:15:08.168 11:09:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.168 11:09:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:08.168 11:09:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.168 11:09:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:08.168 11:09:28 -- common/autotest_common.sh@10 -- # set +x 00:15:08.168 11:09:28 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:08.168 [2024-12-13 11:09:28.625245] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:15:08.168 [2024-12-13 11:09:28.625308] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.168 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.168 [2024-12-13 11:09:28.675746] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:08.428 [2024-12-13 11:09:28.748215] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:08.428 [2024-12-13 11:09:28.748324] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:08.428 [2024-12-13 11:09:28.748331] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:08.428 [2024-12-13 11:09:28.748337] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:08.428 [2024-12-13 11:09:28.748383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:08.428 [2024-12-13 11:09:28.748401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:08.428 [2024-12-13 11:09:28.748402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:08.996 11:09:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:08.996 11:09:29 -- common/autotest_common.sh@862 -- # return 0 00:15:08.996 11:09:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:08.996 11:09:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:08.996 11:09:29 -- common/autotest_common.sh@10 -- # set +x 00:15:08.996 11:09:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:08.996 11:09:29 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:15:08.996 11:09:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.996 11:09:29 -- common/autotest_common.sh@10 -- # set +x 00:15:08.996 [2024-12-13 11:09:29.485585] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x72a140/0x72e630) succeed. 00:15:08.996 [2024-12-13 11:09:29.493543] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x72b690/0x76fcd0) succeed. 
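With the RDMA transport created and both mlx5 ports registered as IB devices, abort.sh brings up its target in the steps traced below: a 64 MB malloc bdev with a 4096-byte block size, a delay bdev stacked on top of it (about a second of added latency, so there is always I/O in flight to abort), a subsystem holding that namespace, and RDMA listeners on 192.168.100.8:4420. Condensed, the same sequence looks roughly like this (a sketch, assuming rpc.py's default /var/tmp/spdk.sock socket; all values are taken from the log lines that follow):

# sketch of the abort-test bring-up driven by the RPCs below
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 4096 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
# the abort example then drives the target at queue depth 128 and submits aborts:
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128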
00:15:09.255 11:09:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.255 11:09:29 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:15:09.255 11:09:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.255 11:09:29 -- common/autotest_common.sh@10 -- # set +x 00:15:09.255 Malloc0 00:15:09.255 11:09:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.255 11:09:29 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:09.255 11:09:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.255 11:09:29 -- common/autotest_common.sh@10 -- # set +x 00:15:09.255 Delay0 00:15:09.255 11:09:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.255 11:09:29 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:09.255 11:09:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.255 11:09:29 -- common/autotest_common.sh@10 -- # set +x 00:15:09.255 11:09:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.255 11:09:29 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:15:09.255 11:09:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.255 11:09:29 -- common/autotest_common.sh@10 -- # set +x 00:15:09.255 11:09:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.255 11:09:29 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:15:09.255 11:09:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.255 11:09:29 -- common/autotest_common.sh@10 -- # set +x 00:15:09.255 [2024-12-13 11:09:29.639105] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:09.255 11:09:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.255 11:09:29 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:15:09.255 11:09:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.255 11:09:29 -- common/autotest_common.sh@10 -- # set +x 00:15:09.255 11:09:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.255 11:09:29 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:15:09.255 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.255 [2024-12-13 11:09:29.727003] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:11.787 Initializing NVMe Controllers 00:15:11.787 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:15:11.787 controller IO queue size 128 less than required 00:15:11.787 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:15:11.787 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:15:11.787 Initialization complete. Launching workers. 
00:15:11.787 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 56982 00:15:11.787 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 57043, failed to submit 62 00:15:11.787 success 56982, unsuccess 61, failed 0 00:15:11.787 11:09:31 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:11.787 11:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.787 11:09:31 -- common/autotest_common.sh@10 -- # set +x 00:15:11.787 11:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.787 11:09:31 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:15:11.787 11:09:31 -- target/abort.sh@38 -- # nvmftestfini 00:15:11.787 11:09:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:11.787 11:09:31 -- nvmf/common.sh@116 -- # sync 00:15:11.787 11:09:31 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:15:11.787 11:09:31 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:15:11.787 11:09:31 -- nvmf/common.sh@119 -- # set +e 00:15:11.787 11:09:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:11.787 11:09:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:15:11.787 rmmod nvme_rdma 00:15:11.787 rmmod nvme_fabrics 00:15:11.787 11:09:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:11.787 11:09:31 -- nvmf/common.sh@123 -- # set -e 00:15:11.787 11:09:31 -- nvmf/common.sh@124 -- # return 0 00:15:11.787 11:09:31 -- nvmf/common.sh@477 -- # '[' -n 1573595 ']' 00:15:11.788 11:09:31 -- nvmf/common.sh@478 -- # killprocess 1573595 00:15:11.788 11:09:31 -- common/autotest_common.sh@936 -- # '[' -z 1573595 ']' 00:15:11.788 11:09:31 -- common/autotest_common.sh@940 -- # kill -0 1573595 00:15:11.788 11:09:31 -- common/autotest_common.sh@941 -- # uname 00:15:11.788 11:09:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:11.788 11:09:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1573595 00:15:11.788 11:09:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:11.788 11:09:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:11.788 11:09:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1573595' 00:15:11.788 killing process with pid 1573595 00:15:11.788 11:09:31 -- common/autotest_common.sh@955 -- # kill 1573595 00:15:11.788 11:09:31 -- common/autotest_common.sh@960 -- # wait 1573595 00:15:11.788 11:09:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:11.788 11:09:32 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:15:11.788 00:15:11.788 real 0m9.348s 00:15:11.788 user 0m14.152s 00:15:11.788 sys 0m4.582s 00:15:11.788 11:09:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:11.788 11:09:32 -- common/autotest_common.sh@10 -- # set +x 00:15:11.788 ************************************ 00:15:11.788 END TEST nvmf_abort 00:15:11.788 ************************************ 00:15:11.788 11:09:32 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:15:11.788 11:09:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:11.788 11:09:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:11.788 11:09:32 -- common/autotest_common.sh@10 -- # set +x 00:15:11.788 ************************************ 00:15:11.788 START TEST nvmf_ns_hotplug_stress 00:15:11.788 ************************************ 00:15:11.788 11:09:32 -- common/autotest_common.sh@1114 -- 
# /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:15:11.788 * Looking for test storage... 00:15:11.788 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:11.788 11:09:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:11.788 11:09:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:11.788 11:09:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:12.048 11:09:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:12.048 11:09:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:12.048 11:09:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:12.048 11:09:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:12.048 11:09:32 -- scripts/common.sh@335 -- # IFS=.-: 00:15:12.048 11:09:32 -- scripts/common.sh@335 -- # read -ra ver1 00:15:12.048 11:09:32 -- scripts/common.sh@336 -- # IFS=.-: 00:15:12.048 11:09:32 -- scripts/common.sh@336 -- # read -ra ver2 00:15:12.048 11:09:32 -- scripts/common.sh@337 -- # local 'op=<' 00:15:12.048 11:09:32 -- scripts/common.sh@339 -- # ver1_l=2 00:15:12.048 11:09:32 -- scripts/common.sh@340 -- # ver2_l=1 00:15:12.048 11:09:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:12.048 11:09:32 -- scripts/common.sh@343 -- # case "$op" in 00:15:12.048 11:09:32 -- scripts/common.sh@344 -- # : 1 00:15:12.048 11:09:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:12.048 11:09:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:12.048 11:09:32 -- scripts/common.sh@364 -- # decimal 1 00:15:12.048 11:09:32 -- scripts/common.sh@352 -- # local d=1 00:15:12.048 11:09:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:12.048 11:09:32 -- scripts/common.sh@354 -- # echo 1 00:15:12.048 11:09:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:12.048 11:09:32 -- scripts/common.sh@365 -- # decimal 2 00:15:12.048 11:09:32 -- scripts/common.sh@352 -- # local d=2 00:15:12.048 11:09:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:12.048 11:09:32 -- scripts/common.sh@354 -- # echo 2 00:15:12.048 11:09:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:12.048 11:09:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:12.048 11:09:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:12.048 11:09:32 -- scripts/common.sh@367 -- # return 0 00:15:12.048 11:09:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:12.048 11:09:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:12.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.048 --rc genhtml_branch_coverage=1 00:15:12.048 --rc genhtml_function_coverage=1 00:15:12.048 --rc genhtml_legend=1 00:15:12.048 --rc geninfo_all_blocks=1 00:15:12.048 --rc geninfo_unexecuted_blocks=1 00:15:12.048 00:15:12.048 ' 00:15:12.048 11:09:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:12.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.048 --rc genhtml_branch_coverage=1 00:15:12.048 --rc genhtml_function_coverage=1 00:15:12.048 --rc genhtml_legend=1 00:15:12.048 --rc geninfo_all_blocks=1 00:15:12.048 --rc geninfo_unexecuted_blocks=1 00:15:12.048 00:15:12.048 ' 00:15:12.048 11:09:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:12.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.048 --rc genhtml_branch_coverage=1 00:15:12.048 --rc genhtml_function_coverage=1 
00:15:12.048 --rc genhtml_legend=1 00:15:12.048 --rc geninfo_all_blocks=1 00:15:12.048 --rc geninfo_unexecuted_blocks=1 00:15:12.048 00:15:12.048 ' 00:15:12.048 11:09:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:12.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.048 --rc genhtml_branch_coverage=1 00:15:12.048 --rc genhtml_function_coverage=1 00:15:12.048 --rc genhtml_legend=1 00:15:12.048 --rc geninfo_all_blocks=1 00:15:12.048 --rc geninfo_unexecuted_blocks=1 00:15:12.048 00:15:12.048 ' 00:15:12.048 11:09:32 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:12.048 11:09:32 -- nvmf/common.sh@7 -- # uname -s 00:15:12.048 11:09:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:12.048 11:09:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:12.048 11:09:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:12.048 11:09:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:12.048 11:09:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:12.048 11:09:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:12.048 11:09:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:12.048 11:09:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:12.048 11:09:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:12.048 11:09:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:12.048 11:09:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:12.048 11:09:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:15:12.048 11:09:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:12.048 11:09:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:12.048 11:09:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:12.048 11:09:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:12.048 11:09:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:12.048 11:09:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:12.048 11:09:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:12.048 11:09:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.048 11:09:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.048 11:09:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.048 11:09:32 -- paths/export.sh@5 -- # export PATH 00:15:12.048 11:09:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.048 11:09:32 -- nvmf/common.sh@46 -- # : 0 00:15:12.048 11:09:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:12.048 11:09:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:12.048 11:09:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:12.048 11:09:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:12.048 11:09:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:12.048 11:09:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:12.048 11:09:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:12.048 11:09:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:12.048 11:09:32 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:12.048 11:09:32 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:15:12.048 11:09:32 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:15:12.048 11:09:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:12.048 11:09:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:12.048 11:09:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:12.048 11:09:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:12.048 11:09:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.048 11:09:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:12.048 11:09:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.048 11:09:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:12.048 11:09:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:12.048 11:09:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:12.048 11:09:32 -- common/autotest_common.sh@10 -- # set +x 00:15:17.324 11:09:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:17.325 11:09:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:17.325 11:09:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:17.325 11:09:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:17.325 11:09:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:17.325 11:09:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:17.325 11:09:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:17.325 11:09:37 -- nvmf/common.sh@294 -- # net_devs=() 00:15:17.325 11:09:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:17.325 11:09:37 -- nvmf/common.sh@295 -- 
# e810=() 00:15:17.325 11:09:37 -- nvmf/common.sh@295 -- # local -ga e810 00:15:17.325 11:09:37 -- nvmf/common.sh@296 -- # x722=() 00:15:17.325 11:09:37 -- nvmf/common.sh@296 -- # local -ga x722 00:15:17.325 11:09:37 -- nvmf/common.sh@297 -- # mlx=() 00:15:17.325 11:09:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:17.325 11:09:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:17.325 11:09:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:17.325 11:09:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:17.325 11:09:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:17.325 11:09:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:17.325 11:09:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:17.325 11:09:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:17.325 11:09:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:17.325 11:09:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:17.325 11:09:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:17.325 11:09:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:17.325 11:09:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:17.325 11:09:37 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:15:17.325 11:09:37 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:15:17.325 11:09:37 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:15:17.325 11:09:37 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:15:17.325 11:09:37 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:15:17.325 11:09:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:17.325 11:09:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:17.325 11:09:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:15:17.325 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:15:17.325 11:09:37 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:17.325 11:09:37 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:17.325 11:09:37 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:17.325 11:09:37 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:17.325 11:09:37 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:17.325 11:09:37 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:17.325 11:09:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:17.325 11:09:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:15:17.325 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:15:17.325 11:09:37 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:17.325 11:09:37 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:17.325 11:09:37 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:17.325 11:09:37 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:17.325 11:09:37 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:17.325 11:09:37 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:17.325 11:09:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:17.325 11:09:37 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:15:17.325 11:09:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:17.325 11:09:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.325 11:09:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
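Note on the device-discovery trace above: gather_supported_nvmf_pci_devs reduces to globbing sysfs for the net device behind each matching Mellanox PCI function. A minimal stand-alone sketch of that lookup, assuming the same sysfs layout; the PCI address 0000:18:00.0 and the resulting name mlx_0_0 are the ones printed in this log, nothing else is inferred:

    # Sketch: resolve the netdev name behind a PCI function, mirroring the
    # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) step in the xtrace above.
    pci=0000:18:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"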
00:15:17.325 11:09:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.325 11:09:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:15:17.325 Found net devices under 0000:18:00.0: mlx_0_0 00:15:17.325 11:09:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.325 11:09:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:17.325 11:09:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.325 11:09:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:17.325 11:09:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.325 11:09:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:15:17.325 Found net devices under 0000:18:00.1: mlx_0_1 00:15:17.325 11:09:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.325 11:09:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:17.325 11:09:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:17.325 11:09:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:17.325 11:09:37 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:15:17.325 11:09:37 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:15:17.325 11:09:37 -- nvmf/common.sh@408 -- # rdma_device_init 00:15:17.325 11:09:37 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:15:17.325 11:09:37 -- nvmf/common.sh@57 -- # uname 00:15:17.325 11:09:37 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:15:17.325 11:09:37 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:15:17.325 11:09:37 -- nvmf/common.sh@62 -- # modprobe ib_core 00:15:17.325 11:09:37 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:15:17.325 11:09:37 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:15:17.325 11:09:37 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:15:17.325 11:09:37 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:15:17.325 11:09:37 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:15:17.325 11:09:37 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:15:17.325 11:09:37 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:17.325 11:09:37 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:15:17.325 11:09:37 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:17.325 11:09:37 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:17.325 11:09:37 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:17.325 11:09:37 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:17.325 11:09:37 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:17.325 11:09:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:17.325 11:09:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.325 11:09:37 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:17.325 11:09:37 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:17.325 11:09:37 -- nvmf/common.sh@104 -- # continue 2 00:15:17.325 11:09:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:17.325 11:09:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.325 11:09:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:17.325 11:09:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.325 11:09:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:17.325 11:09:37 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:17.325 11:09:37 -- nvmf/common.sh@104 -- # continue 2 00:15:17.325 11:09:37 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:15:17.325 11:09:37 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:15:17.325 11:09:37 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:17.325 11:09:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:17.325 11:09:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:17.325 11:09:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:17.325 11:09:37 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:15:17.325 11:09:37 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:15:17.325 11:09:37 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:15:17.325 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:17.325 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:15:17.325 altname enp24s0f0np0 00:15:17.325 altname ens785f0np0 00:15:17.325 inet 192.168.100.8/24 scope global mlx_0_0 00:15:17.325 valid_lft forever preferred_lft forever 00:15:17.325 11:09:37 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:17.325 11:09:37 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:15:17.325 11:09:37 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:17.325 11:09:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:17.325 11:09:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:17.325 11:09:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:17.325 11:09:37 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:15:17.325 11:09:37 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:15:17.325 11:09:37 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:15:17.325 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:17.325 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:15:17.325 altname enp24s0f1np1 00:15:17.325 altname ens785f1np1 00:15:17.325 inet 192.168.100.9/24 scope global mlx_0_1 00:15:17.325 valid_lft forever preferred_lft forever 00:15:17.325 11:09:37 -- nvmf/common.sh@410 -- # return 0 00:15:17.325 11:09:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:17.325 11:09:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:17.325 11:09:37 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:15:17.325 11:09:37 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:15:17.325 11:09:37 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:15:17.325 11:09:37 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:17.325 11:09:37 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:17.325 11:09:37 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:17.325 11:09:37 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:17.325 11:09:37 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:17.325 11:09:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:17.325 11:09:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.325 11:09:37 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:17.325 11:09:37 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:17.325 11:09:37 -- nvmf/common.sh@104 -- # continue 2 00:15:17.326 11:09:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:17.326 11:09:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.326 11:09:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:17.326 11:09:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.326 11:09:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:17.326 11:09:37 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:17.326 11:09:37 -- 
nvmf/common.sh@104 -- # continue 2 00:15:17.326 11:09:37 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:17.326 11:09:37 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:15:17.326 11:09:37 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:17.326 11:09:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:17.326 11:09:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:17.326 11:09:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:17.326 11:09:37 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:17.326 11:09:37 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:15:17.326 11:09:37 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:17.326 11:09:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:17.326 11:09:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:17.326 11:09:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:17.326 11:09:37 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:15:17.326 192.168.100.9' 00:15:17.326 11:09:37 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:15:17.326 192.168.100.9' 00:15:17.326 11:09:37 -- nvmf/common.sh@445 -- # head -n 1 00:15:17.326 11:09:37 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:17.326 11:09:37 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:17.326 192.168.100.9' 00:15:17.326 11:09:37 -- nvmf/common.sh@446 -- # tail -n +2 00:15:17.326 11:09:37 -- nvmf/common.sh@446 -- # head -n 1 00:15:17.326 11:09:37 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:17.326 11:09:37 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:15:17.326 11:09:37 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:17.326 11:09:37 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:15:17.326 11:09:37 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:15:17.326 11:09:37 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:15:17.326 11:09:37 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:15:17.326 11:09:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:17.326 11:09:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:17.326 11:09:37 -- common/autotest_common.sh@10 -- # set +x 00:15:17.326 11:09:37 -- nvmf/common.sh@469 -- # nvmfpid=1577436 00:15:17.326 11:09:37 -- nvmf/common.sh@470 -- # waitforlisten 1577436 00:15:17.326 11:09:37 -- common/autotest_common.sh@829 -- # '[' -z 1577436 ']' 00:15:17.326 11:09:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.326 11:09:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:17.326 11:09:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.326 11:09:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:17.326 11:09:37 -- common/autotest_common.sh@10 -- # set +x 00:15:17.326 11:09:37 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:17.326 [2024-12-13 11:09:37.821102] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
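The allocate_nic_ips / get_available_rdma_ips steps traced above amount to one pipeline per RDMA interface; a minimal sketch using the interface names and addresses the log itself prints (192.168.100.8 on mlx_0_0, 192.168.100.9 on mlx_0_1):

    # Sketch: extract the IPv4 address of each RDMA-capable netdev, as common.sh does above.
    for ifc in mlx_0_0 mlx_0_1; do
        ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
    done
    # Expected output on this node: 192.168.100.8 and 192.168.100.9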
00:15:17.326 [2024-12-13 11:09:37.821144] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.326 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.326 [2024-12-13 11:09:37.870637] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:17.585 [2024-12-13 11:09:37.943929] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:17.585 [2024-12-13 11:09:37.944030] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.585 [2024-12-13 11:09:37.944037] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.585 [2024-12-13 11:09:37.944043] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:17.585 [2024-12-13 11:09:37.944081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:17.585 [2024-12-13 11:09:37.944172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:17.585 [2024-12-13 11:09:37.944173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.153 11:09:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:18.153 11:09:38 -- common/autotest_common.sh@862 -- # return 0 00:15:18.153 11:09:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:18.153 11:09:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:18.153 11:09:38 -- common/autotest_common.sh@10 -- # set +x 00:15:18.153 11:09:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:18.153 11:09:38 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:15:18.153 11:09:38 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:18.416 [2024-12-13 11:09:38.813412] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xfba140/0xfbe630) succeed. 00:15:18.416 [2024-12-13 11:09:38.821460] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xfbb690/0xfffcd0) succeed. 
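Condensed from the rpc.py calls traced here and in the lines that follow, the target-side setup for this test is the sequence below. This is a sketch for reference only; the NQN, listener address, bdev names, and sizes are exactly the ones the trace shows:

    # Sketch: target setup as performed by ns_hotplug_stress.sh (values copied from the trace).
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0                                    # backing bdev
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0                 # namespace 1
    $rpc bdev_null_create NULL1 1000 512                                         # resizable null bdev
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1                  # namespace 2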
00:15:18.416 11:09:38 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:18.675 11:09:39 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:18.934 [2024-12-13 11:09:39.258112] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:18.934 11:09:39 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:15:18.934 11:09:39 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:15:19.193 Malloc0 00:15:19.193 11:09:39 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:19.452 Delay0 00:15:19.452 11:09:39 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:19.452 11:09:40 -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:15:19.711 NULL1 00:15:19.711 11:09:40 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:19.969 11:09:40 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:15:19.969 11:09:40 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1577973 00:15:19.969 11:09:40 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1577973 00:15:19.969 11:09:40 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:19.969 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.347 Read completed with error (sct=0, sc=11) 00:15:21.347 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:21.347 11:09:41 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:21.347 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:21.347 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:21.347 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:21.347 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:21.347 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:21.347 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:21.347 11:09:41 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:15:21.347 11:09:41 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:15:21.347 true 00:15:21.347 11:09:41 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1577973 00:15:21.347 11:09:41 -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:22.283 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:22.283 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:22.283 11:09:42 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:22.283 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:22.283 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:22.284 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:22.284 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:22.284 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:22.284 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:22.542 11:09:42 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:15:22.542 11:09:42 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:15:22.542 true 00:15:22.542 11:09:43 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1577973 00:15:22.542 11:09:43 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:23.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:23.478 11:09:43 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:23.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:23.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:23.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:23.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:23.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:23.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:23.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:23.737 11:09:44 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:15:23.737 11:09:44 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:15:23.737 true 00:15:23.737 11:09:44 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1577973 00:15:23.737 11:09:44 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:24.748 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:24.748 11:09:45 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:24.748 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:24.748 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:24.748 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:24.748 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:24.748 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:24.748 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:15:24.748 11:09:45 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:15:24.748 11:09:45 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:15:25.073 true 00:15:25.073 11:09:45 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1577973 00:15:25.073 11:09:45 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:26.008 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:26.008 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:26.008 11:09:46 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:26.008 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:26.008 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:26.008 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:26.008 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:26.008 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:26.008 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:26.008 11:09:46 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:15:26.008 11:09:46 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:15:26.267 true 00:15:26.267 11:09:46 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1577973 00:15:26.267 11:09:46 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:27.203 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:27.203 11:09:47 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:27.203 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:27.203 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:27.203 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:27.203 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:27.203 11:09:47 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:15:27.203 11:09:47 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:15:27.203 true 00:15:27.462 11:09:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1577973 00:15:27.462 11:09:47 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:28.398 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:28.398 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:28.398 11:09:48 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:28.398 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:28.398 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:28.398 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:15:28.398 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:28.398 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:28.398 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:28.398 11:09:48 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:15:28.398 11:09:48 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:15:28.656 true 00:15:28.656 11:09:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1577973 00:15:28.656 11:09:49 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:29.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:29.592 11:09:49 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:29.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:29.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:29.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:29.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:29.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:29.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:29.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:29.592 11:09:50 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:15:29.592 11:09:50 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:15:29.851 true 00:15:29.851 11:09:50 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1577973 00:15:29.851 11:09:50 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:30.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:30.787 11:09:51 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:30.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:30.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:30.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:30.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:30.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:30.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:30.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:30.787 11:09:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:15:30.787 11:09:51 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:15:31.046 true 00:15:31.046 11:09:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1577973 00:15:31.046 11:09:51 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:31.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.981 
11:09:52 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:31.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.981 11:09:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:15:31.981 11:09:52 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:15:32.240 true 00:15:32.240 11:09:52 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1577973 00:15:32.240 11:09:52 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:33.176 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:33.176 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:33.176 11:09:53 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:33.176 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:33.176 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:33.176 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:33.176 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:33.176 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:33.176 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:33.176 11:09:53 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:15:33.176 11:09:53 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:15:33.176 true 00:15:33.435 11:09:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1577973 00:15:33.435 11:09:53 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:34.371 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:34.371 11:09:54 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:34.371 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:34.371 11:09:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:15:34.371 11:09:54 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:15:34.371 true 00:15:34.630 11:09:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1577973 00:15:34.630 11:09:54 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:34.630 11:09:55 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:34.888 
11:09:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:15:34.888 11:09:55 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:15:34.888 true 00:15:34.888 11:09:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1577973 00:15:34.888 11:09:55 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:35.146 11:09:55 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:35.405 11:09:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:15:35.405 11:09:55 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:15:35.405 true 00:15:35.405 11:09:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1577973 00:15:35.405 11:09:55 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:35.664 11:09:56 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:35.922 11:09:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:15:35.922 11:09:56 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:15:35.922 true 00:15:35.922 11:09:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1577973 00:15:35.922 11:09:56 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:36.181 11:09:56 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:36.439 11:09:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:15:36.439 11:09:56 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:15:36.439 true 00:15:36.439 11:09:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1577973 00:15:36.439 11:09:56 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:36.698 11:09:57 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:36.956 11:09:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:15:36.956 11:09:57 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:15:36.956 true 00:15:36.956 11:09:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1577973 00:15:36.956 11:09:57 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:37.215 11:09:57 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:37.473 11:09:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:15:37.473 11:09:57 -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:15:37.473 true 00:15:37.473 11:09:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1577973 00:15:37.473 11:09:57 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:37.732 11:09:58 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:37.732 11:09:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:15:37.732 11:09:58 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:15:37.991 true 00:15:37.991 11:09:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1577973 00:15:37.991 11:09:58 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:38.249 11:09:58 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:38.508 11:09:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:15:38.508 11:09:58 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:15:38.508 true 00:15:38.508 11:09:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1577973 00:15:38.508 11:09:58 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:38.767 11:09:59 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:38.767 11:09:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:15:38.767 11:09:59 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:15:39.026 true 00:15:39.026 11:09:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1577973 00:15:39.026 11:09:59 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:39.284 11:09:59 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:39.284 11:09:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:15:39.284 11:09:59 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:15:39.543 true 00:15:39.543 11:09:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1577973 00:15:39.543 11:09:59 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:39.802 11:10:00 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:39.802 11:10:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:15:39.802 11:10:00 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:15:40.061 true 00:15:40.061 11:10:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 
1577973 00:15:40.061 11:10:00 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:41.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.437 11:10:01 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:41.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.437 11:10:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:15:41.437 11:10:01 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:15:41.437 true 00:15:41.437 11:10:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1577973 00:15:41.437 11:10:01 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:42.373 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.373 11:10:02 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:42.373 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.373 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.373 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.373 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.373 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.631 11:10:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:15:42.631 11:10:03 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:15:42.631 true 00:15:42.631 11:10:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1577973 00:15:42.631 11:10:03 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:43.566 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:43.566 11:10:04 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:43.566 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:43.566 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:43.566 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:43.566 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:43.566 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:43.825 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:15:43.825 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:43.825 11:10:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:15:43.825 11:10:04 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:15:43.825 true 00:15:43.825 11:10:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1577973 00:15:43.825 11:10:04 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:44.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.761 11:10:05 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:44.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.019 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.019 11:10:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:15:45.019 11:10:05 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:15:45.019 true 00:15:45.019 11:10:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1577973 00:15:45.019 11:10:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:45.955 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.955 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.955 11:10:06 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:45.955 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.955 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.955 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.955 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.955 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.213 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.213 11:10:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:15:46.213 11:10:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:15:46.213 true 00:15:46.213 11:10:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1577973 00:15:46.213 11:10:06 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:47.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.148 11:10:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:47.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
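The repeating @44-@50 pattern in the trace (kill -0 on the perf PID, remove namespace 1, re-add Delay0, grow NULL1 by one block) is the hotplug stress loop itself. A sketch of that loop, with the control flow inferred from the trace rather than copied from ns_hotplug_stress.sh, so details may differ from the real script:

    # Sketch of the hotplug loop the xtrace keeps repeating while spdk_nvme_perf runs.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    PERF_PID=1577973                                     # pid of spdk_nvme_perf in this run
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do            # keep cycling while the workload is alive
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 "$null_size"         # trace shows 1001, 1002, ... 1031, ...
    done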
00:15:47.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.407 11:10:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:15:47.407 11:10:07 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:15:47.407 true 00:15:47.407 11:10:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1577973 00:15:47.407 11:10:07 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:48.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.343 11:10:08 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:48.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.602 11:10:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:15:48.602 11:10:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:15:48.861 true 00:15:48.861 11:10:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1577973 00:15:48.861 11:10:09 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:49.797 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:49.797 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:49.797 11:10:10 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:49.797 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:49.797 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:49.797 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:49.797 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:49.797 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:49.797 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:49.797 11:10:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:15:49.797 11:10:10 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:15:50.056 true 00:15:50.056 11:10:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1577973 00:15:50.056 11:10:10 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:50.993 11:10:11 -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:50.993 11:10:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:15:50.993 11:10:11 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:15:51.251 true 00:15:51.251 11:10:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1577973 00:15:51.252 11:10:11 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:51.252 11:10:11 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:51.510 11:10:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:15:51.510 11:10:11 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:15:51.769 true 00:15:51.769 11:10:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1577973 00:15:51.769 11:10:12 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:51.769 11:10:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:52.028 11:10:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:15:52.028 11:10:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:15:52.028 true 00:15:52.287 11:10:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1577973 00:15:52.287 11:10:12 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:52.287 11:10:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:52.545 11:10:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:15:52.545 11:10:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:15:52.545 true 00:15:52.804 11:10:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1577973 00:15:52.804 11:10:13 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:52.804 Initializing NVMe Controllers 00:15:52.804 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:15:52.804 Controller IO queue size 128, less than required. 00:15:52.804 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:52.804 Controller IO queue size 128, less than required. 00:15:52.804 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:52.804 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:52.804 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:52.804 Initialization complete. Launching workers. 
00:15:52.804 ======================================================== 00:15:52.804 Latency(us) 00:15:52.804 Device Information : IOPS MiB/s Average min max 00:15:52.804 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4901.93 2.39 19826.67 734.71 1124663.81 00:15:52.804 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 32564.43 15.90 3930.57 2125.33 202362.99 00:15:52.804 ======================================================== 00:15:52.804 Total : 37466.37 18.29 6010.34 734.71 1124663.81 00:15:52.804 00:15:52.804 11:10:13 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:53.063 11:10:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:15:53.063 11:10:13 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:15:53.322 true 00:15:53.322 11:10:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1577973 00:15:53.322 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1577973) - No such process 00:15:53.322 11:10:13 -- target/ns_hotplug_stress.sh@53 -- # wait 1577973 00:15:53.322 11:10:13 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:53.322 11:10:13 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:53.580 11:10:13 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:15:53.580 11:10:13 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:15:53.580 11:10:13 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:15:53.580 11:10:13 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:53.580 11:10:13 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:15:53.580 null0 00:15:53.839 11:10:14 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:53.839 11:10:14 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:53.839 11:10:14 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:15:53.839 null1 00:15:53.839 11:10:14 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:53.839 11:10:14 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:53.839 11:10:14 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:15:54.098 null2 00:15:54.098 11:10:14 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:54.098 11:10:14 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:54.098 11:10:14 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:15:54.098 null3 00:15:54.356 11:10:14 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:54.356 11:10:14 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:54.356 11:10:14 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:15:54.356 null4 00:15:54.356 11:10:14 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:54.356 11:10:14 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
00:15:54.356 11:10:14 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:15:54.619 null5 00:15:54.619 11:10:15 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:54.619 11:10:15 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:54.619 11:10:15 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:15:54.619 null6 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:15:54.879 null7 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
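[Editor's note] From here the test switches to its multi-worker phase: eight small null bdevs (null0 through null7, each created with bdev_null_create <name> 100 4096) are attached to and detached from cnode1 concurrently, which is what produces the interleaved add_ns/remove_ns trace that continues below until the "wait 1584465 1584467 ..." step. A rough sketch of that phase, reconstructed from the traced shell lines (only the names visible in the trace, nthreads, pids, add_remove and the nsid/bdev pairing, are taken from it; the surrounding structure is illustrative):

rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
nthreads=8
pids=()

# each worker repeatedly attaches its bdev as a fixed namespace id and removes it again
add_remove() {
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

for ((i = 0; i < nthreads; i++)); do
    $rpc_py bdev_null_create "null$i" 100 4096    # one small null bdev per worker
done

for ((i = 0; i < nthreads; i++)); do
    add_remove $((i + 1)) "null$i" &              # worker i drives namespace id i+1
    pids+=($!)
done

wait "${pids[@]}"                                 # corresponds to the traced "wait 1584465 1584467 ..."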
00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@66 -- # wait 1584465 1584467 1584468 1584470 1584472 1584474 1584476 1584478 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:54.879 11:10:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:55.138 11:10:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:55.138 11:10:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:55.138 11:10:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:55.138 11:10:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:55.138 11:10:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:55.138 11:10:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:55.138 11:10:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:55.138 11:10:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:55.397 11:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:55.397 11:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:55.397 11:10:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:55.397 11:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:55.397 11:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:55.397 11:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:55.397 11:10:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:55.397 11:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:55.397 11:10:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:55.397 11:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:55.397 11:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:55.397 11:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:55.397 11:10:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:15:55.397 11:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:55.397 11:10:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:55.397 11:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:55.397 11:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:55.397 11:10:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:55.397 11:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:55.397 11:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:55.397 11:10:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:55.397 11:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:55.397 11:10:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:55.397 11:10:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:55.397 11:10:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:55.397 11:10:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:55.397 11:10:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:55.397 11:10:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:55.397 11:10:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:55.397 11:10:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:55.397 11:10:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:55.397 11:10:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:55.655 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:55.655 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:55.655 11:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:55.655 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:55.655 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:55.655 11:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:55.655 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:55.655 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:55.655 11:10:16 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:55.655 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:55.655 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:55.655 11:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:55.655 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:55.655 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:55.655 11:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:55.655 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:55.655 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:55.655 11:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:55.655 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:55.655 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:55.655 11:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:55.655 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:55.655 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:55.655 11:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:55.913 11:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:55.913 11:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:55.913 11:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:55.913 11:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:55.913 11:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:55.913 11:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:55.913 11:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:55.913 11:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:55.913 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:55.913 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:55.913 11:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:55.913 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:15:55.913 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:55.913 11:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:55.913 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:55.913 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:55.913 11:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:55.913 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:55.913 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:55.913 11:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:55.913 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:55.913 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:55.913 11:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:55.913 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:55.913 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:55.913 11:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:55.913 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:55.913 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:55.913 11:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:56.172 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:56.172 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.172 11:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:56.172 11:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:56.172 11:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:56.172 11:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:56.172 11:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:56.172 11:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:56.172 11:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:56.172 11:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:56.172 11:10:16 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:56.431 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:56.431 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.431 11:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:56.431 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:56.431 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.431 11:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:56.431 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:56.431 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.431 11:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:56.431 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:56.431 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.431 11:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:56.431 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:56.431 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.431 11:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:56.431 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:56.431 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.431 11:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:56.431 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:56.431 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.431 11:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:56.431 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:56.431 11:10:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.431 11:10:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:56.431 11:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:56.431 11:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:56.431 11:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:56.431 11:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:56.431 11:10:16 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:56.431 11:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:56.431 11:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:56.431 11:10:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:56.690 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:56.690 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.690 11:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:56.690 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:56.690 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.690 11:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:56.690 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:56.690 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.690 11:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:56.691 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:56.691 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.691 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:56.691 11:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:56.691 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.691 11:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:56.691 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:56.691 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.691 11:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:56.691 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:56.691 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.691 11:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:56.691 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:56.691 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.691 11:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:56.949 11:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:56.949 11:10:17 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:56.949 11:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:56.949 11:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:56.949 11:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:56.949 11:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:56.949 11:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:56.949 11:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:56.949 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:56.949 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.949 11:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:56.949 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:56.949 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:56.949 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:56.949 11:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:56.949 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.208 11:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:57.208 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.208 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.208 11:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:57.208 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.208 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.208 11:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:57.208 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.208 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.208 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.208 11:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:57.208 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.208 11:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:57.208 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.208 11:10:17 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.208 11:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:57.209 11:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:57.209 11:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:57.209 11:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:57.209 11:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:57.209 11:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:57.209 11:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:57.209 11:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:57.209 11:10:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:57.468 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.468 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.468 11:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:57.468 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.468 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.468 11:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:57.468 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.468 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.468 11:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:57.468 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.468 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.468 11:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:57.468 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.468 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.468 11:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:57.468 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.468 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.468 11:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:57.468 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.468 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.468 11:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:57.468 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.468 11:10:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.468 11:10:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:57.727 11:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:57.727 11:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:57.727 11:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:57.727 11:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:57.727 11:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:57.727 11:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:57.727 11:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:57.727 11:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:57.727 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.727 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.727 11:10:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:57.727 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.727 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.727 11:10:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:57.727 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.727 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.727 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.727 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.727 11:10:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:57.727 11:10:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:57.727 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.727 11:10:18 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.727 11:10:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:57.727 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.727 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.727 11:10:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:57.727 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.727 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.727 11:10:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:57.727 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:57.727 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:57.727 11:10:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:57.986 11:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:57.986 11:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:57.986 11:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:57.986 11:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:57.986 11:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:57.986 11:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:57.986 11:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:57.986 11:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:58.245 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.245 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.245 11:10:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:58.245 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.245 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.245 11:10:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:58.245 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.245 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.245 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.245 11:10:18 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:58.246 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.246 11:10:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:58.246 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.246 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.246 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.246 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.246 11:10:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:58.246 11:10:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:58.246 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.246 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.246 11:10:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:58.246 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.246 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.246 11:10:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:58.246 11:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:58.246 11:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:58.246 11:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:58.246 11:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:58.246 11:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:58.246 11:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:58.246 11:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:58.246 11:10:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:58.505 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.505 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.505 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.505 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.505 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.505 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.505 11:10:18 -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.505 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.505 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.505 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.505 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.505 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.505 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.505 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.505 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:58.505 11:10:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:58.505 11:10:18 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:58.505 11:10:18 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:15:58.505 11:10:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:58.505 11:10:18 -- nvmf/common.sh@116 -- # sync 00:15:58.505 11:10:18 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:15:58.505 11:10:18 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:15:58.505 11:10:18 -- nvmf/common.sh@119 -- # set +e 00:15:58.505 11:10:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:58.505 11:10:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:15:58.505 rmmod nvme_rdma 00:15:58.505 rmmod nvme_fabrics 00:15:58.505 11:10:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:58.505 11:10:19 -- nvmf/common.sh@123 -- # set -e 00:15:58.505 11:10:19 -- nvmf/common.sh@124 -- # return 0 00:15:58.505 11:10:19 -- nvmf/common.sh@477 -- # '[' -n 1577436 ']' 00:15:58.505 11:10:19 -- nvmf/common.sh@478 -- # killprocess 1577436 00:15:58.505 11:10:19 -- common/autotest_common.sh@936 -- # '[' -z 1577436 ']' 00:15:58.505 11:10:19 -- common/autotest_common.sh@940 -- # kill -0 1577436 00:15:58.505 11:10:19 -- common/autotest_common.sh@941 -- # uname 00:15:58.505 11:10:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:58.505 11:10:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1577436 00:15:58.763 11:10:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:58.763 11:10:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:58.763 11:10:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1577436' 00:15:58.763 killing process with pid 1577436 00:15:58.763 11:10:19 -- common/autotest_common.sh@955 -- # kill 1577436 00:15:58.763 11:10:19 -- common/autotest_common.sh@960 -- # wait 1577436 00:15:59.022 11:10:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:59.022 11:10:19 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:15:59.022 00:15:59.022 real 0m47.071s 00:15:59.022 user 3m19.416s 00:15:59.022 sys 0m11.326s 00:15:59.022 11:10:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:59.022 11:10:19 -- common/autotest_common.sh@10 -- # set +x 00:15:59.022 ************************************ 00:15:59.022 END TEST nvmf_ns_hotplug_stress 00:15:59.022 ************************************ 00:15:59.022 11:10:19 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:15:59.022 11:10:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:59.022 11:10:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:59.022 11:10:19 -- common/autotest_common.sh@10 -- # set +x 00:15:59.022 ************************************ 00:15:59.022 START TEST 
nvmf_connect_stress 00:15:59.022 ************************************ 00:15:59.023 11:10:19 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:15:59.023 * Looking for test storage... 00:15:59.023 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:59.023 11:10:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:59.023 11:10:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:59.023 11:10:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:59.023 11:10:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:59.023 11:10:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:59.023 11:10:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:59.023 11:10:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:59.023 11:10:19 -- scripts/common.sh@335 -- # IFS=.-: 00:15:59.023 11:10:19 -- scripts/common.sh@335 -- # read -ra ver1 00:15:59.023 11:10:19 -- scripts/common.sh@336 -- # IFS=.-: 00:15:59.023 11:10:19 -- scripts/common.sh@336 -- # read -ra ver2 00:15:59.023 11:10:19 -- scripts/common.sh@337 -- # local 'op=<' 00:15:59.023 11:10:19 -- scripts/common.sh@339 -- # ver1_l=2 00:15:59.023 11:10:19 -- scripts/common.sh@340 -- # ver2_l=1 00:15:59.023 11:10:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:59.023 11:10:19 -- scripts/common.sh@343 -- # case "$op" in 00:15:59.023 11:10:19 -- scripts/common.sh@344 -- # : 1 00:15:59.023 11:10:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:59.023 11:10:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:59.023 11:10:19 -- scripts/common.sh@364 -- # decimal 1 00:15:59.023 11:10:19 -- scripts/common.sh@352 -- # local d=1 00:15:59.023 11:10:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:59.023 11:10:19 -- scripts/common.sh@354 -- # echo 1 00:15:59.023 11:10:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:59.023 11:10:19 -- scripts/common.sh@365 -- # decimal 2 00:15:59.023 11:10:19 -- scripts/common.sh@352 -- # local d=2 00:15:59.023 11:10:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:59.023 11:10:19 -- scripts/common.sh@354 -- # echo 2 00:15:59.023 11:10:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:59.023 11:10:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:59.023 11:10:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:59.023 11:10:19 -- scripts/common.sh@367 -- # return 0 00:15:59.023 11:10:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:59.023 11:10:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:59.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.023 --rc genhtml_branch_coverage=1 00:15:59.023 --rc genhtml_function_coverage=1 00:15:59.023 --rc genhtml_legend=1 00:15:59.023 --rc geninfo_all_blocks=1 00:15:59.023 --rc geninfo_unexecuted_blocks=1 00:15:59.023 00:15:59.023 ' 00:15:59.023 11:10:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:59.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.023 --rc genhtml_branch_coverage=1 00:15:59.023 --rc genhtml_function_coverage=1 00:15:59.023 --rc genhtml_legend=1 00:15:59.023 --rc geninfo_all_blocks=1 00:15:59.023 --rc geninfo_unexecuted_blocks=1 00:15:59.023 00:15:59.023 ' 00:15:59.023 11:10:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:59.023 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.023 --rc genhtml_branch_coverage=1 00:15:59.023 --rc genhtml_function_coverage=1 00:15:59.023 --rc genhtml_legend=1 00:15:59.023 --rc geninfo_all_blocks=1 00:15:59.023 --rc geninfo_unexecuted_blocks=1 00:15:59.023 00:15:59.023 ' 00:15:59.023 11:10:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:59.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.023 --rc genhtml_branch_coverage=1 00:15:59.023 --rc genhtml_function_coverage=1 00:15:59.023 --rc genhtml_legend=1 00:15:59.023 --rc geninfo_all_blocks=1 00:15:59.023 --rc geninfo_unexecuted_blocks=1 00:15:59.023 00:15:59.023 ' 00:15:59.023 11:10:19 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:59.023 11:10:19 -- nvmf/common.sh@7 -- # uname -s 00:15:59.023 11:10:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:59.023 11:10:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:59.023 11:10:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:59.023 11:10:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:59.023 11:10:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:59.023 11:10:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:59.023 11:10:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:59.023 11:10:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:59.023 11:10:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:59.023 11:10:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:59.023 11:10:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:59.023 11:10:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:15:59.023 11:10:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:59.023 11:10:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:59.023 11:10:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:59.023 11:10:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:59.023 11:10:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:59.023 11:10:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:59.023 11:10:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:59.023 11:10:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.023 11:10:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
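The ns_hotplug_stress pass traced above reduces to a short loop that attaches null bdevs as namespaces 1-10 of cnode1 and then detaches them again through rpc.py. A minimal sketch of that pattern (the NQN, bdev names and rpc.py arguments are copied from the trace; the loop structure and the backgrounding, suggested by the interleaved ordering above, are assumptions, not the verbatim script):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  # attach null0..null9 as namespace IDs 1..10, concurrently
  for ((i = 0; i < 10; i++)); do
      $rpc nvmf_subsystem_add_ns -n $((i + 1)) $nqn null$i &
  done
  wait
  # then detach every namespace by ID
  for ((i = 1; i <= 10; i++)); do
      $rpc nvmf_subsystem_remove_ns $nqn $i &
  done
  wait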
00:15:59.023 11:10:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.023 11:10:19 -- paths/export.sh@5 -- # export PATH 00:15:59.023 11:10:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.023 11:10:19 -- nvmf/common.sh@46 -- # : 0 00:15:59.023 11:10:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:59.023 11:10:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:59.023 11:10:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:59.023 11:10:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:59.023 11:10:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:59.023 11:10:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:59.023 11:10:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:59.023 11:10:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:59.023 11:10:19 -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:59.023 11:10:19 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:15:59.023 11:10:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:59.023 11:10:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:59.023 11:10:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:59.023 11:10:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:59.023 11:10:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.023 11:10:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:59.023 11:10:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.023 11:10:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:59.023 11:10:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:59.023 11:10:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:59.023 11:10:19 -- common/autotest_common.sh@10 -- # set +x 00:16:05.593 11:10:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:05.593 11:10:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:05.593 11:10:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:05.593 11:10:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:05.593 11:10:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:05.593 11:10:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:05.593 11:10:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:05.593 11:10:25 -- nvmf/common.sh@294 -- # net_devs=() 00:16:05.593 11:10:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:05.593 11:10:25 -- nvmf/common.sh@295 -- # e810=() 00:16:05.593 11:10:25 -- nvmf/common.sh@295 -- # local -ga e810 
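The common.sh sourcing above also derives the host identity used for later nvme connect calls: a host NQN from nvme gen-hostnqn and a host ID that is the UUID portion of that NQN. Roughly equivalent shell (the parameter expansion is an assumption; the command and the resulting values match the trace):

  NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:00bafac1-... in this run
  NVME_HOSTID=${NVME_HOSTNQN##*:}     # keep only the trailing UUID
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")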
00:16:05.593 11:10:25 -- nvmf/common.sh@296 -- # x722=() 00:16:05.593 11:10:25 -- nvmf/common.sh@296 -- # local -ga x722 00:16:05.593 11:10:25 -- nvmf/common.sh@297 -- # mlx=() 00:16:05.593 11:10:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:05.593 11:10:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:05.593 11:10:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:05.593 11:10:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:05.593 11:10:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:05.593 11:10:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:05.593 11:10:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:05.593 11:10:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:05.593 11:10:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:05.593 11:10:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:05.593 11:10:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:05.593 11:10:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:05.593 11:10:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:05.593 11:10:25 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:16:05.593 11:10:25 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:16:05.593 11:10:25 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:16:05.593 11:10:25 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:16:05.593 11:10:25 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:16:05.593 11:10:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:05.593 11:10:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:05.593 11:10:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:16:05.593 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:16:05.593 11:10:25 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:05.593 11:10:25 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:05.593 11:10:25 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:05.593 11:10:25 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:05.593 11:10:25 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:05.593 11:10:25 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:05.593 11:10:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:05.593 11:10:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:16:05.593 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:16:05.593 11:10:25 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:05.593 11:10:25 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:05.593 11:10:25 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:05.593 11:10:25 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:05.593 11:10:25 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:05.593 11:10:25 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:05.593 11:10:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:05.593 11:10:25 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:16:05.593 11:10:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:05.593 11:10:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:05.593 11:10:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:05.593 11:10:25 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:05.593 11:10:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:16:05.593 Found net devices under 0000:18:00.0: mlx_0_0 00:16:05.593 11:10:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:05.593 11:10:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:05.593 11:10:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:05.593 11:10:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:05.593 11:10:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:05.593 11:10:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:16:05.593 Found net devices under 0000:18:00.1: mlx_0_1 00:16:05.593 11:10:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:05.593 11:10:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:05.593 11:10:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:05.593 11:10:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:05.593 11:10:25 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:16:05.593 11:10:25 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:16:05.593 11:10:25 -- nvmf/common.sh@408 -- # rdma_device_init 00:16:05.593 11:10:25 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:16:05.593 11:10:25 -- nvmf/common.sh@57 -- # uname 00:16:05.593 11:10:25 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:16:05.593 11:10:25 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:16:05.593 11:10:25 -- nvmf/common.sh@62 -- # modprobe ib_core 00:16:05.593 11:10:25 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:16:05.593 11:10:25 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:16:05.593 11:10:25 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:16:05.593 11:10:25 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:16:05.593 11:10:25 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:16:05.593 11:10:25 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:16:05.593 11:10:25 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:05.593 11:10:25 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:16:05.593 11:10:25 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:05.593 11:10:25 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:05.593 11:10:25 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:05.593 11:10:25 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:05.593 11:10:25 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:05.594 11:10:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:05.594 11:10:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:05.594 11:10:25 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:05.594 11:10:25 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:05.594 11:10:25 -- nvmf/common.sh@104 -- # continue 2 00:16:05.594 11:10:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:05.594 11:10:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:05.594 11:10:25 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:05.594 11:10:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:05.594 11:10:25 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:05.594 11:10:25 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:05.594 11:10:25 -- nvmf/common.sh@104 -- # continue 2 00:16:05.594 11:10:25 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:05.594 11:10:25 -- 
nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:16:05.594 11:10:25 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:05.594 11:10:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:05.594 11:10:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:05.594 11:10:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:05.594 11:10:25 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:16:05.594 11:10:25 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:16:05.594 11:10:25 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:16:05.594 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:05.594 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:16:05.594 altname enp24s0f0np0 00:16:05.594 altname ens785f0np0 00:16:05.594 inet 192.168.100.8/24 scope global mlx_0_0 00:16:05.594 valid_lft forever preferred_lft forever 00:16:05.594 11:10:25 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:05.594 11:10:25 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:16:05.594 11:10:25 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:05.594 11:10:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:05.594 11:10:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:05.594 11:10:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:05.594 11:10:25 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:16:05.594 11:10:25 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:16:05.594 11:10:25 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:16:05.594 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:05.594 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:16:05.594 altname enp24s0f1np1 00:16:05.594 altname ens785f1np1 00:16:05.594 inet 192.168.100.9/24 scope global mlx_0_1 00:16:05.594 valid_lft forever preferred_lft forever 00:16:05.594 11:10:25 -- nvmf/common.sh@410 -- # return 0 00:16:05.594 11:10:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:05.594 11:10:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:05.594 11:10:25 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:16:05.594 11:10:25 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:16:05.594 11:10:25 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:16:05.594 11:10:25 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:05.594 11:10:25 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:05.594 11:10:25 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:05.594 11:10:25 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:05.594 11:10:25 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:05.594 11:10:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:05.594 11:10:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:05.594 11:10:25 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:05.594 11:10:25 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:05.594 11:10:25 -- nvmf/common.sh@104 -- # continue 2 00:16:05.594 11:10:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:05.594 11:10:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:05.594 11:10:25 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:05.594 11:10:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:05.594 11:10:25 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:05.594 11:10:25 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:05.594 11:10:25 -- nvmf/common.sh@104 -- # continue 2 00:16:05.594 11:10:25 -- 
nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:05.594 11:10:25 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:16:05.594 11:10:25 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:05.594 11:10:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:05.594 11:10:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:05.594 11:10:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:05.594 11:10:25 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:05.594 11:10:25 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:16:05.594 11:10:25 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:05.594 11:10:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:05.594 11:10:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:05.594 11:10:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:05.594 11:10:25 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:16:05.594 192.168.100.9' 00:16:05.594 11:10:25 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:16:05.594 192.168.100.9' 00:16:05.594 11:10:25 -- nvmf/common.sh@445 -- # head -n 1 00:16:05.594 11:10:25 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:05.594 11:10:25 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:05.594 192.168.100.9' 00:16:05.594 11:10:25 -- nvmf/common.sh@446 -- # head -n 1 00:16:05.594 11:10:25 -- nvmf/common.sh@446 -- # tail -n +2 00:16:05.594 11:10:25 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:05.594 11:10:25 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:16:05.594 11:10:25 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:05.594 11:10:25 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:16:05.594 11:10:25 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:16:05.594 11:10:25 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:16:05.594 11:10:25 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:05.594 11:10:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:05.594 11:10:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:05.594 11:10:25 -- common/autotest_common.sh@10 -- # set +x 00:16:05.594 11:10:25 -- nvmf/common.sh@469 -- # nvmfpid=1588680 00:16:05.594 11:10:25 -- nvmf/common.sh@470 -- # waitforlisten 1588680 00:16:05.594 11:10:25 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:05.594 11:10:25 -- common/autotest_common.sh@829 -- # '[' -z 1588680 ']' 00:16:05.594 11:10:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.594 11:10:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:05.594 11:10:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.594 11:10:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:05.594 11:10:25 -- common/autotest_common.sh@10 -- # set +x 00:16:05.594 [2024-12-13 11:10:25.413151] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
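The address discovery that finishes just above is a small pipeline: take the IPv4 address on each RDMA netdev, strip the prefix length, and keep the first two results as target IPs. A condensed sketch (function and variable names follow the trace; the surrounding loop is a simplification):

  get_ip_address() {
      local ifc=$1
      # column 4 of `ip -o -4 addr show` is ADDR/PREFIX; drop the /PREFIX part
      ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
  }
  RDMA_IP_LIST=$(for ifc in mlx_0_0 mlx_0_1; do get_ip_address "$ifc"; done)
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8 here
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9 here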
00:16:05.594 [2024-12-13 11:10:25.413201] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:05.594 EAL: No free 2048 kB hugepages reported on node 1 00:16:05.594 [2024-12-13 11:10:25.469002] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:05.594 [2024-12-13 11:10:25.538591] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:05.594 [2024-12-13 11:10:25.538712] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:05.594 [2024-12-13 11:10:25.538720] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:05.594 [2024-12-13 11:10:25.538726] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:05.594 [2024-12-13 11:10:25.538849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:05.594 [2024-12-13 11:10:25.538934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:05.594 [2024-12-13 11:10:25.538935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:05.853 11:10:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:05.853 11:10:26 -- common/autotest_common.sh@862 -- # return 0 00:16:05.853 11:10:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:05.853 11:10:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:05.853 11:10:26 -- common/autotest_common.sh@10 -- # set +x 00:16:05.853 11:10:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:05.853 11:10:26 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:05.853 11:10:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.853 11:10:26 -- common/autotest_common.sh@10 -- # set +x 00:16:05.853 [2024-12-13 11:10:26.271665] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x24a5140/0x24a9630) succeed. 00:16:05.853 [2024-12-13 11:10:26.279599] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x24a6690/0x24eacd0) succeed. 
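The rpc_cmd call above goes through the test framework's RPC helper against the target's socket (/var/tmp/spdk.sock in this run); issued directly, the transport creation would look roughly like this (the -s flag is an assumption, since that socket is also rpc.py's default):

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192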
00:16:05.853 11:10:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.853 11:10:26 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:05.853 11:10:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.853 11:10:26 -- common/autotest_common.sh@10 -- # set +x 00:16:05.853 11:10:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.853 11:10:26 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:05.853 11:10:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.853 11:10:26 -- common/autotest_common.sh@10 -- # set +x 00:16:05.853 [2024-12-13 11:10:26.390544] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:05.853 11:10:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.853 11:10:26 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:05.853 11:10:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.853 11:10:26 -- common/autotest_common.sh@10 -- # set +x 00:16:05.853 NULL1 00:16:05.853 11:10:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.853 11:10:26 -- target/connect_stress.sh@21 -- # PERF_PID=1588919 00:16:05.853 11:10:26 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:05.854 11:10:26 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:05.854 11:10:26 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:05.854 11:10:26 -- target/connect_stress.sh@27 -- # seq 1 20 00:16:05.854 11:10:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:05.854 11:10:26 -- target/connect_stress.sh@28 -- # cat 00:16:05.854 11:10:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:05.854 11:10:26 -- target/connect_stress.sh@28 -- # cat 00:16:06.113 11:10:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:06.113 11:10:26 -- target/connect_stress.sh@28 -- # cat 00:16:06.113 11:10:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:06.113 11:10:26 -- target/connect_stress.sh@28 -- # cat 00:16:06.113 11:10:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:06.113 11:10:26 -- target/connect_stress.sh@28 -- # cat 00:16:06.113 11:10:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:06.113 EAL: No free 2048 kB hugepages reported on node 1 00:16:06.113 11:10:26 -- target/connect_stress.sh@28 -- # cat 00:16:06.113 11:10:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:06.113 11:10:26 -- target/connect_stress.sh@28 -- # cat 00:16:06.113 11:10:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:06.113 11:10:26 -- target/connect_stress.sh@28 -- # cat 00:16:06.113 11:10:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:06.113 11:10:26 -- target/connect_stress.sh@28 -- # cat 00:16:06.113 11:10:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:06.113 11:10:26 -- target/connect_stress.sh@28 -- # cat 00:16:06.113 11:10:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:06.113 11:10:26 -- target/connect_stress.sh@28 -- # cat 
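Taken together, the connect_stress setup and watchdog just traced amount to the following shape (the individual commands and arguments are copied from the trace; the rpc_cmd stand-in, the while-loop structure and the contents of the rpc.txt batch are assumptions):

  # simplified stand-in for the framework's rpc_cmd helper
  rpc_cmd() { /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py "$@"; }
  nqn=nqn.2016-06.io.spdk:cnode1
  rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt

  rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  rpc_cmd nvmf_create_subsystem $nqn -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener $nqn -t rdma -a 192.168.100.8 -s 4420
  rpc_cmd bdev_null_create NULL1 1000 512

  # hammer the listener with connects/disconnects for 10 seconds
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
  PERF_PID=$!

  # while the stress tool is still alive, keep issuing batched RPCs at the target
  while kill -0 $PERF_PID 2>/dev/null; do
      rpc_cmd <"$rpcs"    # a prebuilt batch of 20 commands; its contents are not shown in the trace
  done
  wait $PERF_PID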
00:16:06.113 11:10:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:06.113 11:10:26 -- target/connect_stress.sh@28 -- # cat 00:16:06.113 11:10:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:06.113 11:10:26 -- target/connect_stress.sh@28 -- # cat 00:16:06.113 11:10:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:06.113 11:10:26 -- target/connect_stress.sh@28 -- # cat 00:16:06.113 11:10:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:06.113 11:10:26 -- target/connect_stress.sh@28 -- # cat 00:16:06.113 11:10:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:06.113 11:10:26 -- target/connect_stress.sh@28 -- # cat 00:16:06.113 11:10:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:06.113 11:10:26 -- target/connect_stress.sh@28 -- # cat 00:16:06.113 11:10:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:06.113 11:10:26 -- target/connect_stress.sh@28 -- # cat 00:16:06.113 11:10:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:06.113 11:10:26 -- target/connect_stress.sh@28 -- # cat 00:16:06.113 11:10:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:06.113 11:10:26 -- target/connect_stress.sh@28 -- # cat 00:16:06.113 11:10:26 -- target/connect_stress.sh@34 -- # kill -0 1588919 00:16:06.113 11:10:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:06.113 11:10:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.113 11:10:26 -- common/autotest_common.sh@10 -- # set +x 00:16:06.372 11:10:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.372 11:10:26 -- target/connect_stress.sh@34 -- # kill -0 1588919 00:16:06.372 11:10:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:06.372 11:10:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.372 11:10:26 -- common/autotest_common.sh@10 -- # set +x 00:16:06.631 11:10:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.631 11:10:27 -- target/connect_stress.sh@34 -- # kill -0 1588919 00:16:06.631 11:10:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:06.631 11:10:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.631 11:10:27 -- common/autotest_common.sh@10 -- # set +x 00:16:07.198 11:10:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.198 11:10:27 -- target/connect_stress.sh@34 -- # kill -0 1588919 00:16:07.198 11:10:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:07.198 11:10:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.198 11:10:27 -- common/autotest_common.sh@10 -- # set +x 00:16:07.457 11:10:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.457 11:10:27 -- target/connect_stress.sh@34 -- # kill -0 1588919 00:16:07.457 11:10:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:07.457 11:10:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.457 11:10:27 -- common/autotest_common.sh@10 -- # set +x 00:16:07.716 11:10:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.716 11:10:28 -- target/connect_stress.sh@34 -- # kill -0 1588919 00:16:07.716 11:10:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:07.716 11:10:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.716 11:10:28 -- common/autotest_common.sh@10 -- # set +x 00:16:07.974 11:10:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.974 11:10:28 -- target/connect_stress.sh@34 -- # kill -0 1588919 00:16:07.974 11:10:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:07.974 11:10:28 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.974 11:10:28 -- common/autotest_common.sh@10 -- # set +x 00:16:08.302 11:10:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.302 11:10:28 -- target/connect_stress.sh@34 -- # kill -0 1588919 00:16:08.302 11:10:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:08.302 11:10:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.302 11:10:28 -- common/autotest_common.sh@10 -- # set +x 00:16:08.655 11:10:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.655 11:10:29 -- target/connect_stress.sh@34 -- # kill -0 1588919 00:16:08.655 11:10:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:08.655 11:10:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.655 11:10:29 -- common/autotest_common.sh@10 -- # set +x 00:16:08.919 11:10:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.919 11:10:29 -- target/connect_stress.sh@34 -- # kill -0 1588919 00:16:08.919 11:10:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:08.919 11:10:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.919 11:10:29 -- common/autotest_common.sh@10 -- # set +x 00:16:09.178 11:10:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.178 11:10:29 -- target/connect_stress.sh@34 -- # kill -0 1588919 00:16:09.178 11:10:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:09.178 11:10:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.178 11:10:29 -- common/autotest_common.sh@10 -- # set +x 00:16:09.746 11:10:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.746 11:10:30 -- target/connect_stress.sh@34 -- # kill -0 1588919 00:16:09.746 11:10:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:09.746 11:10:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.746 11:10:30 -- common/autotest_common.sh@10 -- # set +x 00:16:10.004 11:10:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.004 11:10:30 -- target/connect_stress.sh@34 -- # kill -0 1588919 00:16:10.004 11:10:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:10.004 11:10:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.004 11:10:30 -- common/autotest_common.sh@10 -- # set +x 00:16:10.263 11:10:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.263 11:10:30 -- target/connect_stress.sh@34 -- # kill -0 1588919 00:16:10.263 11:10:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:10.263 11:10:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.263 11:10:30 -- common/autotest_common.sh@10 -- # set +x 00:16:10.522 11:10:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.522 11:10:31 -- target/connect_stress.sh@34 -- # kill -0 1588919 00:16:10.522 11:10:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:10.522 11:10:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.522 11:10:31 -- common/autotest_common.sh@10 -- # set +x 00:16:10.781 11:10:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.781 11:10:31 -- target/connect_stress.sh@34 -- # kill -0 1588919 00:16:10.781 11:10:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:10.781 11:10:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.781 11:10:31 -- common/autotest_common.sh@10 -- # set +x 00:16:11.349 11:10:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.349 11:10:31 -- target/connect_stress.sh@34 -- # kill -0 1588919 00:16:11.349 11:10:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:11.349 11:10:31 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.349 11:10:31 -- common/autotest_common.sh@10 -- # set +x 00:16:11.608 11:10:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.608 11:10:31 -- target/connect_stress.sh@34 -- # kill -0 1588919 00:16:11.608 11:10:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:11.608 11:10:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.608 11:10:31 -- common/autotest_common.sh@10 -- # set +x 00:16:11.866 11:10:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.866 11:10:32 -- target/connect_stress.sh@34 -- # kill -0 1588919 00:16:11.866 11:10:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:11.866 11:10:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.866 11:10:32 -- common/autotest_common.sh@10 -- # set +x 00:16:12.125 11:10:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.125 11:10:32 -- target/connect_stress.sh@34 -- # kill -0 1588919 00:16:12.125 11:10:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:12.125 11:10:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.125 11:10:32 -- common/autotest_common.sh@10 -- # set +x 00:16:12.693 11:10:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.693 11:10:32 -- target/connect_stress.sh@34 -- # kill -0 1588919 00:16:12.693 11:10:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:12.693 11:10:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.693 11:10:32 -- common/autotest_common.sh@10 -- # set +x 00:16:12.952 11:10:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.952 11:10:33 -- target/connect_stress.sh@34 -- # kill -0 1588919 00:16:12.952 11:10:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:12.952 11:10:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.952 11:10:33 -- common/autotest_common.sh@10 -- # set +x 00:16:13.210 11:10:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.210 11:10:33 -- target/connect_stress.sh@34 -- # kill -0 1588919 00:16:13.210 11:10:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:13.210 11:10:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.210 11:10:33 -- common/autotest_common.sh@10 -- # set +x 00:16:13.469 11:10:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.469 11:10:33 -- target/connect_stress.sh@34 -- # kill -0 1588919 00:16:13.469 11:10:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:13.469 11:10:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.469 11:10:33 -- common/autotest_common.sh@10 -- # set +x 00:16:13.728 11:10:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.728 11:10:34 -- target/connect_stress.sh@34 -- # kill -0 1588919 00:16:13.728 11:10:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:13.728 11:10:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.728 11:10:34 -- common/autotest_common.sh@10 -- # set +x 00:16:14.296 11:10:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.296 11:10:34 -- target/connect_stress.sh@34 -- # kill -0 1588919 00:16:14.296 11:10:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:14.296 11:10:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.296 11:10:34 -- common/autotest_common.sh@10 -- # set +x 00:16:14.555 11:10:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.555 11:10:34 -- target/connect_stress.sh@34 -- # kill -0 1588919 00:16:14.555 11:10:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:14.555 11:10:34 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.555 11:10:34 -- common/autotest_common.sh@10 -- # set +x 00:16:14.814 11:10:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.814 11:10:35 -- target/connect_stress.sh@34 -- # kill -0 1588919 00:16:14.814 11:10:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:14.814 11:10:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.814 11:10:35 -- common/autotest_common.sh@10 -- # set +x 00:16:15.072 11:10:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.072 11:10:35 -- target/connect_stress.sh@34 -- # kill -0 1588919 00:16:15.072 11:10:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:15.072 11:10:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.072 11:10:35 -- common/autotest_common.sh@10 -- # set +x 00:16:15.640 11:10:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.640 11:10:35 -- target/connect_stress.sh@34 -- # kill -0 1588919 00:16:15.640 11:10:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:15.640 11:10:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.640 11:10:35 -- common/autotest_common.sh@10 -- # set +x 00:16:15.899 11:10:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.899 11:10:36 -- target/connect_stress.sh@34 -- # kill -0 1588919 00:16:15.899 11:10:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:15.899 11:10:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.899 11:10:36 -- common/autotest_common.sh@10 -- # set +x 00:16:16.159 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:16:16.159 11:10:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.159 11:10:36 -- target/connect_stress.sh@34 -- # kill -0 1588919 00:16:16.159 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1588919) - No such process 00:16:16.159 11:10:36 -- target/connect_stress.sh@38 -- # wait 1588919 00:16:16.159 11:10:36 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:16.159 11:10:36 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:16.159 11:10:36 -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:16.159 11:10:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:16.159 11:10:36 -- nvmf/common.sh@116 -- # sync 00:16:16.159 11:10:36 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:16:16.159 11:10:36 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:16:16.159 11:10:36 -- nvmf/common.sh@119 -- # set +e 00:16:16.159 11:10:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:16.159 11:10:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:16:16.159 rmmod nvme_rdma 00:16:16.159 rmmod nvme_fabrics 00:16:16.159 11:10:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:16.159 11:10:36 -- nvmf/common.sh@123 -- # set -e 00:16:16.159 11:10:36 -- nvmf/common.sh@124 -- # return 0 00:16:16.159 11:10:36 -- nvmf/common.sh@477 -- # '[' -n 1588680 ']' 00:16:16.159 11:10:36 -- nvmf/common.sh@478 -- # killprocess 1588680 00:16:16.159 11:10:36 -- common/autotest_common.sh@936 -- # '[' -z 1588680 ']' 00:16:16.159 11:10:36 -- common/autotest_common.sh@940 -- # kill -0 1588680 00:16:16.159 11:10:36 -- common/autotest_common.sh@941 -- # uname 00:16:16.159 11:10:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:16.159 11:10:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1588680 00:16:16.159 11:10:36 -- 
common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:16.159 11:10:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:16.159 11:10:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1588680' 00:16:16.159 killing process with pid 1588680 00:16:16.159 11:10:36 -- common/autotest_common.sh@955 -- # kill 1588680 00:16:16.159 11:10:36 -- common/autotest_common.sh@960 -- # wait 1588680 00:16:16.418 11:10:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:16.418 11:10:36 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:16:16.418 00:16:16.418 real 0m17.535s 00:16:16.418 user 0m41.607s 00:16:16.418 sys 0m6.209s 00:16:16.418 11:10:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:16.418 11:10:36 -- common/autotest_common.sh@10 -- # set +x 00:16:16.418 ************************************ 00:16:16.418 END TEST nvmf_connect_stress 00:16:16.418 ************************************ 00:16:16.418 11:10:36 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:16:16.418 11:10:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:16.418 11:10:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:16.418 11:10:36 -- common/autotest_common.sh@10 -- # set +x 00:16:16.418 ************************************ 00:16:16.418 START TEST nvmf_fused_ordering 00:16:16.418 ************************************ 00:16:16.418 11:10:36 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:16:16.678 * Looking for test storage... 00:16:16.678 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:16.678 11:10:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:16.678 11:10:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:16.678 11:10:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:16.678 11:10:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:16.678 11:10:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:16.678 11:10:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:16.678 11:10:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:16.678 11:10:37 -- scripts/common.sh@335 -- # IFS=.-: 00:16:16.678 11:10:37 -- scripts/common.sh@335 -- # read -ra ver1 00:16:16.678 11:10:37 -- scripts/common.sh@336 -- # IFS=.-: 00:16:16.678 11:10:37 -- scripts/common.sh@336 -- # read -ra ver2 00:16:16.678 11:10:37 -- scripts/common.sh@337 -- # local 'op=<' 00:16:16.678 11:10:37 -- scripts/common.sh@339 -- # ver1_l=2 00:16:16.678 11:10:37 -- scripts/common.sh@340 -- # ver2_l=1 00:16:16.678 11:10:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:16.678 11:10:37 -- scripts/common.sh@343 -- # case "$op" in 00:16:16.678 11:10:37 -- scripts/common.sh@344 -- # : 1 00:16:16.678 11:10:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:16.678 11:10:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:16.678 11:10:37 -- scripts/common.sh@364 -- # decimal 1 00:16:16.678 11:10:37 -- scripts/common.sh@352 -- # local d=1 00:16:16.678 11:10:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:16.678 11:10:37 -- scripts/common.sh@354 -- # echo 1 00:16:16.678 11:10:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:16.678 11:10:37 -- scripts/common.sh@365 -- # decimal 2 00:16:16.678 11:10:37 -- scripts/common.sh@352 -- # local d=2 00:16:16.678 11:10:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:16.678 11:10:37 -- scripts/common.sh@354 -- # echo 2 00:16:16.678 11:10:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:16.678 11:10:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:16.678 11:10:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:16.678 11:10:37 -- scripts/common.sh@367 -- # return 0 00:16:16.678 11:10:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:16.678 11:10:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:16.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.678 --rc genhtml_branch_coverage=1 00:16:16.678 --rc genhtml_function_coverage=1 00:16:16.678 --rc genhtml_legend=1 00:16:16.678 --rc geninfo_all_blocks=1 00:16:16.678 --rc geninfo_unexecuted_blocks=1 00:16:16.678 00:16:16.678 ' 00:16:16.678 11:10:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:16.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.678 --rc genhtml_branch_coverage=1 00:16:16.678 --rc genhtml_function_coverage=1 00:16:16.678 --rc genhtml_legend=1 00:16:16.678 --rc geninfo_all_blocks=1 00:16:16.678 --rc geninfo_unexecuted_blocks=1 00:16:16.678 00:16:16.678 ' 00:16:16.678 11:10:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:16.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.678 --rc genhtml_branch_coverage=1 00:16:16.678 --rc genhtml_function_coverage=1 00:16:16.678 --rc genhtml_legend=1 00:16:16.678 --rc geninfo_all_blocks=1 00:16:16.678 --rc geninfo_unexecuted_blocks=1 00:16:16.678 00:16:16.678 ' 00:16:16.678 11:10:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:16.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.678 --rc genhtml_branch_coverage=1 00:16:16.678 --rc genhtml_function_coverage=1 00:16:16.678 --rc genhtml_legend=1 00:16:16.678 --rc geninfo_all_blocks=1 00:16:16.678 --rc geninfo_unexecuted_blocks=1 00:16:16.678 00:16:16.678 ' 00:16:16.678 11:10:37 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:16.678 11:10:37 -- nvmf/common.sh@7 -- # uname -s 00:16:16.678 11:10:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:16.678 11:10:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:16.678 11:10:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:16.678 11:10:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:16.678 11:10:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:16.678 11:10:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:16.678 11:10:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:16.678 11:10:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:16.678 11:10:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:16.678 11:10:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:16.678 11:10:37 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:16:16.678 11:10:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:16:16.678 11:10:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:16.678 11:10:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:16.678 11:10:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:16.678 11:10:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:16.678 11:10:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:16.678 11:10:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:16.678 11:10:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:16.678 11:10:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.678 11:10:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.679 11:10:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.679 11:10:37 -- paths/export.sh@5 -- # export PATH 00:16:16.679 11:10:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.679 11:10:37 -- nvmf/common.sh@46 -- # : 0 00:16:16.679 11:10:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:16.679 11:10:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:16.679 11:10:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:16.679 11:10:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:16.679 11:10:37 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:16.679 11:10:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:16.679 11:10:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:16.679 11:10:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:16.679 11:10:37 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:16:16.679 11:10:37 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:16:16.679 11:10:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:16.679 11:10:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:16.679 11:10:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:16.679 11:10:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:16.679 11:10:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.679 11:10:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:16.679 11:10:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.679 11:10:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:16.679 11:10:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:16.679 11:10:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:16.679 11:10:37 -- common/autotest_common.sh@10 -- # set +x 00:16:21.952 11:10:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:21.952 11:10:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:21.952 11:10:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:21.952 11:10:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:21.952 11:10:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:21.952 11:10:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:21.952 11:10:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:21.952 11:10:42 -- nvmf/common.sh@294 -- # net_devs=() 00:16:21.952 11:10:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:21.952 11:10:42 -- nvmf/common.sh@295 -- # e810=() 00:16:21.952 11:10:42 -- nvmf/common.sh@295 -- # local -ga e810 00:16:21.952 11:10:42 -- nvmf/common.sh@296 -- # x722=() 00:16:21.952 11:10:42 -- nvmf/common.sh@296 -- # local -ga x722 00:16:21.952 11:10:42 -- nvmf/common.sh@297 -- # mlx=() 00:16:21.952 11:10:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:21.952 11:10:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:21.952 11:10:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:21.952 11:10:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:21.952 11:10:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:21.952 11:10:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:21.952 11:10:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:21.952 11:10:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:21.952 11:10:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:21.952 11:10:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:21.952 11:10:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:21.952 11:10:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:21.952 11:10:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:21.952 11:10:42 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:16:21.952 11:10:42 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:16:21.952 11:10:42 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:16:21.952 11:10:42 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 
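The device filter running here (as it did before connect_stress) groups candidate NICs into per-family arrays keyed by PCI vendor:device ID and then, because this job targets mlx5, keeps only the Mellanox set before resolving netdev names through sysfs. A compressed sketch of that tail end (array names and sysfs paths follow the trace; the env-var name driving the mlx5 choice is an assumption):

  # keep only the Mellanox devices collected above when the job asks for mlx5 NICs
  if [[ "${SPDK_TEST_NVMF_NICS:-}" == mlx5 ]]; then
      pci_devs=("${mlx[@]}")
  fi
  # map each selected PCI function to its kernel netdev name
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")    # e.g. mlx_0_0 and mlx_0_1 in this run
      net_devs+=("${pci_net_devs[@]}")
  done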
00:16:21.952 11:10:42 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:16:21.952 11:10:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:21.952 11:10:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:21.952 11:10:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:16:21.952 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:16:21.952 11:10:42 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:21.952 11:10:42 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:21.952 11:10:42 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:21.952 11:10:42 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:21.952 11:10:42 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:21.952 11:10:42 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:21.952 11:10:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:21.952 11:10:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:16:21.952 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:16:21.952 11:10:42 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:21.952 11:10:42 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:21.952 11:10:42 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:21.952 11:10:42 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:21.952 11:10:42 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:21.952 11:10:42 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:21.952 11:10:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:21.952 11:10:42 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:16:21.952 11:10:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:21.952 11:10:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:21.952 11:10:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:21.952 11:10:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:21.952 11:10:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:16:21.952 Found net devices under 0000:18:00.0: mlx_0_0 00:16:21.952 11:10:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:21.952 11:10:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:21.952 11:10:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:21.952 11:10:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:21.952 11:10:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:21.952 11:10:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:16:21.952 Found net devices under 0000:18:00.1: mlx_0_1 00:16:21.952 11:10:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:21.952 11:10:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:21.952 11:10:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:21.952 11:10:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:21.952 11:10:42 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:16:21.952 11:10:42 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:16:21.953 11:10:42 -- nvmf/common.sh@408 -- # rdma_device_init 00:16:21.953 11:10:42 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:16:21.953 11:10:42 -- nvmf/common.sh@57 -- # uname 00:16:21.953 11:10:42 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:16:21.953 11:10:42 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:16:21.953 11:10:42 -- nvmf/common.sh@62 -- # modprobe ib_core 00:16:21.953 11:10:42 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:16:21.953 
11:10:42 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:16:21.953 11:10:42 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:16:21.953 11:10:42 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:16:21.953 11:10:42 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:16:21.953 11:10:42 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:16:21.953 11:10:42 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:21.953 11:10:42 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:16:21.953 11:10:42 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:21.953 11:10:42 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:21.953 11:10:42 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:21.953 11:10:42 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:21.953 11:10:42 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:21.953 11:10:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:21.953 11:10:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:21.953 11:10:42 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:21.953 11:10:42 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:21.953 11:10:42 -- nvmf/common.sh@104 -- # continue 2 00:16:21.953 11:10:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:21.953 11:10:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:21.953 11:10:42 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:21.953 11:10:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:21.953 11:10:42 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:21.953 11:10:42 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:21.953 11:10:42 -- nvmf/common.sh@104 -- # continue 2 00:16:21.953 11:10:42 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:21.953 11:10:42 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:16:21.953 11:10:42 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:21.953 11:10:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:21.953 11:10:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:21.953 11:10:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:22.213 11:10:42 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:16:22.213 11:10:42 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:16:22.213 11:10:42 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:16:22.213 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:22.213 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:16:22.213 altname enp24s0f0np0 00:16:22.213 altname ens785f0np0 00:16:22.213 inet 192.168.100.8/24 scope global mlx_0_0 00:16:22.213 valid_lft forever preferred_lft forever 00:16:22.213 11:10:42 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:22.213 11:10:42 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:16:22.213 11:10:42 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:22.213 11:10:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:22.213 11:10:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:22.213 11:10:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:22.213 11:10:42 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:16:22.213 11:10:42 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:16:22.213 11:10:42 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:16:22.213 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:22.213 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:16:22.213 altname enp24s0f1np1 00:16:22.213 
altname ens785f1np1 00:16:22.213 inet 192.168.100.9/24 scope global mlx_0_1 00:16:22.213 valid_lft forever preferred_lft forever 00:16:22.213 11:10:42 -- nvmf/common.sh@410 -- # return 0 00:16:22.213 11:10:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:22.213 11:10:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:22.213 11:10:42 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:16:22.213 11:10:42 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:16:22.213 11:10:42 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:16:22.213 11:10:42 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:22.213 11:10:42 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:22.213 11:10:42 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:22.213 11:10:42 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:22.213 11:10:42 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:22.213 11:10:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:22.213 11:10:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:22.213 11:10:42 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:22.213 11:10:42 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:22.213 11:10:42 -- nvmf/common.sh@104 -- # continue 2 00:16:22.213 11:10:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:22.213 11:10:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:22.213 11:10:42 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:22.213 11:10:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:22.213 11:10:42 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:22.213 11:10:42 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:22.213 11:10:42 -- nvmf/common.sh@104 -- # continue 2 00:16:22.213 11:10:42 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:22.213 11:10:42 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:16:22.213 11:10:42 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:22.213 11:10:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:22.213 11:10:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:22.213 11:10:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:22.213 11:10:42 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:22.213 11:10:42 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:16:22.213 11:10:42 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:22.213 11:10:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:22.213 11:10:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:22.213 11:10:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:22.213 11:10:42 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:16:22.213 192.168.100.9' 00:16:22.213 11:10:42 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:16:22.213 192.168.100.9' 00:16:22.213 11:10:42 -- nvmf/common.sh@445 -- # head -n 1 00:16:22.213 11:10:42 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:22.213 11:10:42 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:22.213 192.168.100.9' 00:16:22.213 11:10:42 -- nvmf/common.sh@446 -- # tail -n +2 00:16:22.213 11:10:42 -- nvmf/common.sh@446 -- # head -n 1 00:16:22.213 11:10:42 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:22.213 11:10:42 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:16:22.213 11:10:42 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 
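allocate_nic_ips and get_available_rdma_ips above read the IPv4 address off each mlx_0_* interface with ip -o -4 addr show piped through awk and cut, then split the resulting list into NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP with head and tail. The same extraction as a self-contained sketch; the interface names are the ones seen in this run, everything else mirrors the piped commands in the trace:

#!/usr/bin/env bash
# Collect the IPv4 address of each RDMA-capable interface, one per line,
# then pick the first two as target addresses, mirroring the trace above.
set -euo pipefail

get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

rdma_ips=""
for nic in mlx_0_0 mlx_0_1; do
    rdma_ips+="$(get_ip_address "$nic")"$'\n'
done

first_target_ip=$(echo "$rdma_ips" | head -n 1)
second_target_ip=$(echo "$rdma_ips" | tail -n +2 | head -n 1)
echo "first: $first_target_ip  second: $second_target_ip"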
00:16:22.213 11:10:42 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:16:22.213 11:10:42 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:16:22.213 11:10:42 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:16:22.213 11:10:42 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:16:22.213 11:10:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:22.213 11:10:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:22.213 11:10:42 -- common/autotest_common.sh@10 -- # set +x 00:16:22.213 11:10:42 -- nvmf/common.sh@469 -- # nvmfpid=1594006 00:16:22.213 11:10:42 -- nvmf/common.sh@470 -- # waitforlisten 1594006 00:16:22.213 11:10:42 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:22.213 11:10:42 -- common/autotest_common.sh@829 -- # '[' -z 1594006 ']' 00:16:22.213 11:10:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.213 11:10:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:22.213 11:10:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.213 11:10:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:22.213 11:10:42 -- common/autotest_common.sh@10 -- # set +x 00:16:22.213 [2024-12-13 11:10:42.694779] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:22.213 [2024-12-13 11:10:42.694829] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:22.213 EAL: No free 2048 kB hugepages reported on node 1 00:16:22.213 [2024-12-13 11:10:42.746958] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.472 [2024-12-13 11:10:42.820861] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:22.472 [2024-12-13 11:10:42.820960] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:22.472 [2024-12-13 11:10:42.820966] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:22.472 [2024-12-13 11:10:42.820972] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:22.472 [2024-12-13 11:10:42.820988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:23.041 11:10:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:23.041 11:10:43 -- common/autotest_common.sh@862 -- # return 0 00:16:23.041 11:10:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:23.041 11:10:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:23.041 11:10:43 -- common/autotest_common.sh@10 -- # set +x 00:16:23.041 11:10:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:23.041 11:10:43 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:23.041 11:10:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.041 11:10:43 -- common/autotest_common.sh@10 -- # set +x 00:16:23.041 [2024-12-13 11:10:43.540639] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13c3af0/0x13c7fe0) succeed. 
00:16:23.041 [2024-12-13 11:10:43.548714] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13c4ff0/0x1409680) succeed. 00:16:23.041 11:10:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.041 11:10:43 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:23.041 11:10:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.041 11:10:43 -- common/autotest_common.sh@10 -- # set +x 00:16:23.041 11:10:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.041 11:10:43 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:23.041 11:10:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.041 11:10:43 -- common/autotest_common.sh@10 -- # set +x 00:16:23.041 [2024-12-13 11:10:43.607433] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:23.300 11:10:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.300 11:10:43 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:23.300 11:10:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.300 11:10:43 -- common/autotest_common.sh@10 -- # set +x 00:16:23.300 NULL1 00:16:23.300 11:10:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.300 11:10:43 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:16:23.300 11:10:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.300 11:10:43 -- common/autotest_common.sh@10 -- # set +x 00:16:23.300 11:10:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.300 11:10:43 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:23.300 11:10:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.300 11:10:43 -- common/autotest_common.sh@10 -- # set +x 00:16:23.300 11:10:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.300 11:10:43 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:23.300 [2024-12-13 11:10:43.659389] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
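The rpc_cmd calls traced in the preceding lines configure the target end to end: an RDMA transport, a subsystem with ten namespace slots, a listener on 192.168.100.8:4420, a null bdev, and that bdev attached as a namespace; the fused_ordering example is then pointed at the resulting subsystem. Issued directly with scripts/rpc.py instead of the harness's rpc_cmd wrapper, the same sequence would look roughly like the sketch below. Paths and RPC arguments are copied from the trace; the socket polling loop is illustrative rather than the harness's waitforlisten implementation:

#!/usr/bin/env bash
# Bring up nvmf_tgt, wait for its RPC socket, configure the subsystem the
# fused_ordering test uses, then run the test tool against it.
set -euo pipefail

SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
RPC="$SPDK_DIR/scripts/rpc.py"
RPC_SOCK=/var/tmp/spdk.sock
NQN=nqn.2016-06.io.spdk:cnode1
IP=192.168.100.8

modprobe nvme-rdma

"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# Poll the RPC socket rather than sleeping for a fixed time (illustrative loop).
for _ in $(seq 1 100); do
    "$RPC" -s "$RPC_SOCK" rpc_get_methods &>/dev/null && break
    sleep 0.1
done

"$RPC" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
"$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
"$RPC" nvmf_subsystem_add_listener "$NQN" -t rdma -a "$IP" -s 4420
"$RPC" bdev_null_create NULL1 1000 512        # 1000 MB null bdev, 512-byte blocks
"$RPC" bdev_wait_for_examine
"$RPC" nvmf_subsystem_add_ns "$NQN" NULL1

"$SPDK_DIR/test/nvme/fused_ordering/fused_ordering" \
    -r "trtype:rdma adrfam:IPv4 traddr:$IP trsvcid:4420 subnqn:$NQN"

# Teardown, as nvmftestfini does at the end of the trace.
kill "$nvmfpid" && wait "$nvmfpid" || true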
00:16:23.300 [2024-12-13 11:10:43.659419] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1594150 ] 00:16:23.300 EAL: No free 2048 kB hugepages reported on node 1 00:16:23.300 Attached to nqn.2016-06.io.spdk:cnode1 00:16:23.300 Namespace ID: 1 size: 1GB 00:16:23.300 fused_ordering(0) 00:16:23.300 fused_ordering(1) 00:16:23.300 fused_ordering(2) 00:16:23.300 fused_ordering(3) 00:16:23.300 fused_ordering(4) 00:16:23.300 fused_ordering(5) 00:16:23.300 fused_ordering(6) 00:16:23.300 fused_ordering(7) 00:16:23.300 fused_ordering(8) 00:16:23.300 fused_ordering(9) 00:16:23.300 fused_ordering(10) 00:16:23.300 fused_ordering(11) 00:16:23.300 fused_ordering(12) 00:16:23.300 fused_ordering(13) 00:16:23.300 fused_ordering(14) 00:16:23.301 fused_ordering(15) 00:16:23.301 fused_ordering(16) 00:16:23.301 fused_ordering(17) 00:16:23.301 fused_ordering(18) 00:16:23.301 fused_ordering(19) 00:16:23.301 fused_ordering(20) 00:16:23.301 fused_ordering(21) 00:16:23.301 fused_ordering(22) 00:16:23.301 fused_ordering(23) 00:16:23.301 fused_ordering(24) 00:16:23.301 fused_ordering(25) 00:16:23.301 fused_ordering(26) 00:16:23.301 fused_ordering(27) 00:16:23.301 fused_ordering(28) 00:16:23.301 fused_ordering(29) 00:16:23.301 fused_ordering(30) 00:16:23.301 fused_ordering(31) 00:16:23.301 fused_ordering(32) 00:16:23.301 fused_ordering(33) 00:16:23.301 fused_ordering(34) 00:16:23.301 fused_ordering(35) 00:16:23.301 fused_ordering(36) 00:16:23.301 fused_ordering(37) 00:16:23.301 fused_ordering(38) 00:16:23.301 fused_ordering(39) 00:16:23.301 fused_ordering(40) 00:16:23.301 fused_ordering(41) 00:16:23.301 fused_ordering(42) 00:16:23.301 fused_ordering(43) 00:16:23.301 fused_ordering(44) 00:16:23.301 fused_ordering(45) 00:16:23.301 fused_ordering(46) 00:16:23.301 fused_ordering(47) 00:16:23.301 fused_ordering(48) 00:16:23.301 fused_ordering(49) 00:16:23.301 fused_ordering(50) 00:16:23.301 fused_ordering(51) 00:16:23.301 fused_ordering(52) 00:16:23.301 fused_ordering(53) 00:16:23.301 fused_ordering(54) 00:16:23.301 fused_ordering(55) 00:16:23.301 fused_ordering(56) 00:16:23.301 fused_ordering(57) 00:16:23.301 fused_ordering(58) 00:16:23.301 fused_ordering(59) 00:16:23.301 fused_ordering(60) 00:16:23.301 fused_ordering(61) 00:16:23.301 fused_ordering(62) 00:16:23.301 fused_ordering(63) 00:16:23.301 fused_ordering(64) 00:16:23.301 fused_ordering(65) 00:16:23.301 fused_ordering(66) 00:16:23.301 fused_ordering(67) 00:16:23.301 fused_ordering(68) 00:16:23.301 fused_ordering(69) 00:16:23.301 fused_ordering(70) 00:16:23.301 fused_ordering(71) 00:16:23.301 fused_ordering(72) 00:16:23.301 fused_ordering(73) 00:16:23.301 fused_ordering(74) 00:16:23.301 fused_ordering(75) 00:16:23.301 fused_ordering(76) 00:16:23.301 fused_ordering(77) 00:16:23.301 fused_ordering(78) 00:16:23.301 fused_ordering(79) 00:16:23.301 fused_ordering(80) 00:16:23.301 fused_ordering(81) 00:16:23.301 fused_ordering(82) 00:16:23.301 fused_ordering(83) 00:16:23.301 fused_ordering(84) 00:16:23.301 fused_ordering(85) 00:16:23.301 fused_ordering(86) 00:16:23.301 fused_ordering(87) 00:16:23.301 fused_ordering(88) 00:16:23.301 fused_ordering(89) 00:16:23.301 fused_ordering(90) 00:16:23.301 fused_ordering(91) 00:16:23.301 fused_ordering(92) 00:16:23.301 fused_ordering(93) 00:16:23.301 fused_ordering(94) 00:16:23.301 fused_ordering(95) 00:16:23.301 fused_ordering(96) 00:16:23.301 
fused_ordering(97) 00:16:23.301 fused_ordering(98) 00:16:23.301 fused_ordering(99) 00:16:23.301 fused_ordering(100) 00:16:23.301 fused_ordering(101) 00:16:23.301 fused_ordering(102) 00:16:23.301 fused_ordering(103) 00:16:23.301 fused_ordering(104) 00:16:23.301 fused_ordering(105) 00:16:23.301 fused_ordering(106) 00:16:23.301 fused_ordering(107) 00:16:23.301 fused_ordering(108) 00:16:23.301 fused_ordering(109) 00:16:23.301 fused_ordering(110) 00:16:23.301 fused_ordering(111) 00:16:23.301 fused_ordering(112) 00:16:23.301 fused_ordering(113) 00:16:23.301 fused_ordering(114) 00:16:23.301 fused_ordering(115) 00:16:23.301 fused_ordering(116) 00:16:23.301 fused_ordering(117) 00:16:23.301 fused_ordering(118) 00:16:23.301 fused_ordering(119) 00:16:23.301 fused_ordering(120) 00:16:23.301 fused_ordering(121) 00:16:23.301 fused_ordering(122) 00:16:23.301 fused_ordering(123) 00:16:23.301 fused_ordering(124) 00:16:23.301 fused_ordering(125) 00:16:23.301 fused_ordering(126) 00:16:23.301 fused_ordering(127) 00:16:23.301 fused_ordering(128) 00:16:23.301 fused_ordering(129) 00:16:23.301 fused_ordering(130) 00:16:23.301 fused_ordering(131) 00:16:23.301 fused_ordering(132) 00:16:23.301 fused_ordering(133) 00:16:23.301 fused_ordering(134) 00:16:23.301 fused_ordering(135) 00:16:23.301 fused_ordering(136) 00:16:23.301 fused_ordering(137) 00:16:23.301 fused_ordering(138) 00:16:23.301 fused_ordering(139) 00:16:23.301 fused_ordering(140) 00:16:23.301 fused_ordering(141) 00:16:23.301 fused_ordering(142) 00:16:23.301 fused_ordering(143) 00:16:23.301 fused_ordering(144) 00:16:23.301 fused_ordering(145) 00:16:23.301 fused_ordering(146) 00:16:23.301 fused_ordering(147) 00:16:23.301 fused_ordering(148) 00:16:23.301 fused_ordering(149) 00:16:23.301 fused_ordering(150) 00:16:23.301 fused_ordering(151) 00:16:23.301 fused_ordering(152) 00:16:23.301 fused_ordering(153) 00:16:23.301 fused_ordering(154) 00:16:23.301 fused_ordering(155) 00:16:23.301 fused_ordering(156) 00:16:23.301 fused_ordering(157) 00:16:23.301 fused_ordering(158) 00:16:23.301 fused_ordering(159) 00:16:23.301 fused_ordering(160) 00:16:23.301 fused_ordering(161) 00:16:23.301 fused_ordering(162) 00:16:23.301 fused_ordering(163) 00:16:23.301 fused_ordering(164) 00:16:23.301 fused_ordering(165) 00:16:23.301 fused_ordering(166) 00:16:23.301 fused_ordering(167) 00:16:23.301 fused_ordering(168) 00:16:23.301 fused_ordering(169) 00:16:23.301 fused_ordering(170) 00:16:23.301 fused_ordering(171) 00:16:23.301 fused_ordering(172) 00:16:23.301 fused_ordering(173) 00:16:23.301 fused_ordering(174) 00:16:23.301 fused_ordering(175) 00:16:23.301 fused_ordering(176) 00:16:23.301 fused_ordering(177) 00:16:23.301 fused_ordering(178) 00:16:23.301 fused_ordering(179) 00:16:23.301 fused_ordering(180) 00:16:23.301 fused_ordering(181) 00:16:23.301 fused_ordering(182) 00:16:23.301 fused_ordering(183) 00:16:23.301 fused_ordering(184) 00:16:23.301 fused_ordering(185) 00:16:23.301 fused_ordering(186) 00:16:23.301 fused_ordering(187) 00:16:23.301 fused_ordering(188) 00:16:23.301 fused_ordering(189) 00:16:23.301 fused_ordering(190) 00:16:23.301 fused_ordering(191) 00:16:23.301 fused_ordering(192) 00:16:23.301 fused_ordering(193) 00:16:23.301 fused_ordering(194) 00:16:23.301 fused_ordering(195) 00:16:23.301 fused_ordering(196) 00:16:23.301 fused_ordering(197) 00:16:23.301 fused_ordering(198) 00:16:23.301 fused_ordering(199) 00:16:23.301 fused_ordering(200) 00:16:23.301 fused_ordering(201) 00:16:23.301 fused_ordering(202) 00:16:23.301 fused_ordering(203) 00:16:23.301 fused_ordering(204) 
00:16:23.301 fused_ordering(205) 00:16:23.561 fused_ordering(206) 00:16:23.561 fused_ordering(207) 00:16:23.561 fused_ordering(208) 00:16:23.561 fused_ordering(209) 00:16:23.561 fused_ordering(210) 00:16:23.561 fused_ordering(211) 00:16:23.561 fused_ordering(212) 00:16:23.561 fused_ordering(213) 00:16:23.561 fused_ordering(214) 00:16:23.561 fused_ordering(215) 00:16:23.561 fused_ordering(216) 00:16:23.561 fused_ordering(217) 00:16:23.561 fused_ordering(218) 00:16:23.561 fused_ordering(219) 00:16:23.561 fused_ordering(220) 00:16:23.561 fused_ordering(221) 00:16:23.561 fused_ordering(222) 00:16:23.561 fused_ordering(223) 00:16:23.561 fused_ordering(224) 00:16:23.561 fused_ordering(225) 00:16:23.561 fused_ordering(226) 00:16:23.561 fused_ordering(227) 00:16:23.561 fused_ordering(228) 00:16:23.561 fused_ordering(229) 00:16:23.561 fused_ordering(230) 00:16:23.561 fused_ordering(231) 00:16:23.561 fused_ordering(232) 00:16:23.561 fused_ordering(233) 00:16:23.561 fused_ordering(234) 00:16:23.561 fused_ordering(235) 00:16:23.561 fused_ordering(236) 00:16:23.561 fused_ordering(237) 00:16:23.561 fused_ordering(238) 00:16:23.561 fused_ordering(239) 00:16:23.561 fused_ordering(240) 00:16:23.561 fused_ordering(241) 00:16:23.561 fused_ordering(242) 00:16:23.561 fused_ordering(243) 00:16:23.561 fused_ordering(244) 00:16:23.561 fused_ordering(245) 00:16:23.561 fused_ordering(246) 00:16:23.561 fused_ordering(247) 00:16:23.561 fused_ordering(248) 00:16:23.561 fused_ordering(249) 00:16:23.561 fused_ordering(250) 00:16:23.561 fused_ordering(251) 00:16:23.561 fused_ordering(252) 00:16:23.561 fused_ordering(253) 00:16:23.561 fused_ordering(254) 00:16:23.561 fused_ordering(255) 00:16:23.561 fused_ordering(256) 00:16:23.561 fused_ordering(257) 00:16:23.561 fused_ordering(258) 00:16:23.561 fused_ordering(259) 00:16:23.561 fused_ordering(260) 00:16:23.561 fused_ordering(261) 00:16:23.561 fused_ordering(262) 00:16:23.561 fused_ordering(263) 00:16:23.561 fused_ordering(264) 00:16:23.561 fused_ordering(265) 00:16:23.561 fused_ordering(266) 00:16:23.561 fused_ordering(267) 00:16:23.561 fused_ordering(268) 00:16:23.561 fused_ordering(269) 00:16:23.561 fused_ordering(270) 00:16:23.561 fused_ordering(271) 00:16:23.561 fused_ordering(272) 00:16:23.561 fused_ordering(273) 00:16:23.561 fused_ordering(274) 00:16:23.561 fused_ordering(275) 00:16:23.561 fused_ordering(276) 00:16:23.561 fused_ordering(277) 00:16:23.561 fused_ordering(278) 00:16:23.561 fused_ordering(279) 00:16:23.561 fused_ordering(280) 00:16:23.561 fused_ordering(281) 00:16:23.561 fused_ordering(282) 00:16:23.561 fused_ordering(283) 00:16:23.561 fused_ordering(284) 00:16:23.561 fused_ordering(285) 00:16:23.561 fused_ordering(286) 00:16:23.561 fused_ordering(287) 00:16:23.561 fused_ordering(288) 00:16:23.561 fused_ordering(289) 00:16:23.561 fused_ordering(290) 00:16:23.561 fused_ordering(291) 00:16:23.561 fused_ordering(292) 00:16:23.561 fused_ordering(293) 00:16:23.561 fused_ordering(294) 00:16:23.561 fused_ordering(295) 00:16:23.561 fused_ordering(296) 00:16:23.561 fused_ordering(297) 00:16:23.561 fused_ordering(298) 00:16:23.561 fused_ordering(299) 00:16:23.561 fused_ordering(300) 00:16:23.561 fused_ordering(301) 00:16:23.561 fused_ordering(302) 00:16:23.561 fused_ordering(303) 00:16:23.561 fused_ordering(304) 00:16:23.561 fused_ordering(305) 00:16:23.561 fused_ordering(306) 00:16:23.561 fused_ordering(307) 00:16:23.561 fused_ordering(308) 00:16:23.561 fused_ordering(309) 00:16:23.561 fused_ordering(310) 00:16:23.561 fused_ordering(311) 00:16:23.561 
fused_ordering(312) 00:16:23.562 fused_ordering(313) 00:16:23.562 fused_ordering(314) 00:16:23.562 fused_ordering(315) 00:16:23.562 fused_ordering(316) 00:16:23.562 fused_ordering(317) 00:16:23.562 fused_ordering(318) 00:16:23.562 fused_ordering(319) 00:16:23.562 fused_ordering(320) 00:16:23.562 fused_ordering(321) 00:16:23.562 fused_ordering(322) 00:16:23.562 fused_ordering(323) 00:16:23.562 fused_ordering(324) 00:16:23.562 fused_ordering(325) 00:16:23.562 fused_ordering(326) 00:16:23.562 fused_ordering(327) 00:16:23.562 fused_ordering(328) 00:16:23.562 fused_ordering(329) 00:16:23.562 fused_ordering(330) 00:16:23.562 fused_ordering(331) 00:16:23.562 fused_ordering(332) 00:16:23.562 fused_ordering(333) 00:16:23.562 fused_ordering(334) 00:16:23.562 fused_ordering(335) 00:16:23.562 fused_ordering(336) 00:16:23.562 fused_ordering(337) 00:16:23.562 fused_ordering(338) 00:16:23.562 fused_ordering(339) 00:16:23.562 fused_ordering(340) 00:16:23.562 fused_ordering(341) 00:16:23.562 fused_ordering(342) 00:16:23.562 fused_ordering(343) 00:16:23.562 fused_ordering(344) 00:16:23.562 fused_ordering(345) 00:16:23.562 fused_ordering(346) 00:16:23.562 fused_ordering(347) 00:16:23.562 fused_ordering(348) 00:16:23.562 fused_ordering(349) 00:16:23.562 fused_ordering(350) 00:16:23.562 fused_ordering(351) 00:16:23.562 fused_ordering(352) 00:16:23.562 fused_ordering(353) 00:16:23.562 fused_ordering(354) 00:16:23.562 fused_ordering(355) 00:16:23.562 fused_ordering(356) 00:16:23.562 fused_ordering(357) 00:16:23.562 fused_ordering(358) 00:16:23.562 fused_ordering(359) 00:16:23.562 fused_ordering(360) 00:16:23.562 fused_ordering(361) 00:16:23.562 fused_ordering(362) 00:16:23.562 fused_ordering(363) 00:16:23.562 fused_ordering(364) 00:16:23.562 fused_ordering(365) 00:16:23.562 fused_ordering(366) 00:16:23.562 fused_ordering(367) 00:16:23.562 fused_ordering(368) 00:16:23.562 fused_ordering(369) 00:16:23.562 fused_ordering(370) 00:16:23.562 fused_ordering(371) 00:16:23.562 fused_ordering(372) 00:16:23.562 fused_ordering(373) 00:16:23.562 fused_ordering(374) 00:16:23.562 fused_ordering(375) 00:16:23.562 fused_ordering(376) 00:16:23.562 fused_ordering(377) 00:16:23.562 fused_ordering(378) 00:16:23.562 fused_ordering(379) 00:16:23.562 fused_ordering(380) 00:16:23.562 fused_ordering(381) 00:16:23.562 fused_ordering(382) 00:16:23.562 fused_ordering(383) 00:16:23.562 fused_ordering(384) 00:16:23.562 fused_ordering(385) 00:16:23.562 fused_ordering(386) 00:16:23.562 fused_ordering(387) 00:16:23.562 fused_ordering(388) 00:16:23.562 fused_ordering(389) 00:16:23.562 fused_ordering(390) 00:16:23.562 fused_ordering(391) 00:16:23.562 fused_ordering(392) 00:16:23.562 fused_ordering(393) 00:16:23.562 fused_ordering(394) 00:16:23.562 fused_ordering(395) 00:16:23.562 fused_ordering(396) 00:16:23.562 fused_ordering(397) 00:16:23.562 fused_ordering(398) 00:16:23.562 fused_ordering(399) 00:16:23.562 fused_ordering(400) 00:16:23.562 fused_ordering(401) 00:16:23.562 fused_ordering(402) 00:16:23.562 fused_ordering(403) 00:16:23.562 fused_ordering(404) 00:16:23.562 fused_ordering(405) 00:16:23.562 fused_ordering(406) 00:16:23.562 fused_ordering(407) 00:16:23.562 fused_ordering(408) 00:16:23.562 fused_ordering(409) 00:16:23.562 fused_ordering(410) 00:16:23.562 fused_ordering(411) 00:16:23.562 fused_ordering(412) 00:16:23.562 fused_ordering(413) 00:16:23.562 fused_ordering(414) 00:16:23.562 fused_ordering(415) 00:16:23.562 fused_ordering(416) 00:16:23.562 fused_ordering(417) 00:16:23.562 fused_ordering(418) 00:16:23.562 fused_ordering(419) 
00:16:23.562 fused_ordering(420) 00:16:23.562 fused_ordering(421) 00:16:23.562 fused_ordering(422) 00:16:23.562 fused_ordering(423) 00:16:23.562 fused_ordering(424) 00:16:23.562 fused_ordering(425) 00:16:23.562 fused_ordering(426) 00:16:23.562 fused_ordering(427) 00:16:23.562 fused_ordering(428) 00:16:23.562 fused_ordering(429) 00:16:23.562 fused_ordering(430) 00:16:23.562 fused_ordering(431) 00:16:23.562 fused_ordering(432) 00:16:23.562 fused_ordering(433) 00:16:23.562 fused_ordering(434) 00:16:23.562 fused_ordering(435) 00:16:23.562 fused_ordering(436) 00:16:23.562 fused_ordering(437) 00:16:23.562 fused_ordering(438) 00:16:23.562 fused_ordering(439) 00:16:23.562 fused_ordering(440) 00:16:23.562 fused_ordering(441) 00:16:23.562 fused_ordering(442) 00:16:23.562 fused_ordering(443) 00:16:23.562 fused_ordering(444) 00:16:23.562 fused_ordering(445) 00:16:23.562 fused_ordering(446) 00:16:23.562 fused_ordering(447) 00:16:23.562 fused_ordering(448) 00:16:23.562 fused_ordering(449) 00:16:23.562 fused_ordering(450) 00:16:23.562 fused_ordering(451) 00:16:23.562 fused_ordering(452) 00:16:23.562 fused_ordering(453) 00:16:23.562 fused_ordering(454) 00:16:23.562 fused_ordering(455) 00:16:23.562 fused_ordering(456) 00:16:23.562 fused_ordering(457) 00:16:23.562 fused_ordering(458) 00:16:23.562 fused_ordering(459) 00:16:23.562 fused_ordering(460) 00:16:23.562 fused_ordering(461) 00:16:23.562 fused_ordering(462) 00:16:23.562 fused_ordering(463) 00:16:23.562 fused_ordering(464) 00:16:23.562 fused_ordering(465) 00:16:23.562 fused_ordering(466) 00:16:23.562 fused_ordering(467) 00:16:23.562 fused_ordering(468) 00:16:23.562 fused_ordering(469) 00:16:23.562 fused_ordering(470) 00:16:23.562 fused_ordering(471) 00:16:23.562 fused_ordering(472) 00:16:23.562 fused_ordering(473) 00:16:23.562 fused_ordering(474) 00:16:23.562 fused_ordering(475) 00:16:23.562 fused_ordering(476) 00:16:23.562 fused_ordering(477) 00:16:23.562 fused_ordering(478) 00:16:23.562 fused_ordering(479) 00:16:23.562 fused_ordering(480) 00:16:23.562 fused_ordering(481) 00:16:23.562 fused_ordering(482) 00:16:23.562 fused_ordering(483) 00:16:23.562 fused_ordering(484) 00:16:23.562 fused_ordering(485) 00:16:23.562 fused_ordering(486) 00:16:23.562 fused_ordering(487) 00:16:23.562 fused_ordering(488) 00:16:23.562 fused_ordering(489) 00:16:23.562 fused_ordering(490) 00:16:23.562 fused_ordering(491) 00:16:23.562 fused_ordering(492) 00:16:23.562 fused_ordering(493) 00:16:23.562 fused_ordering(494) 00:16:23.562 fused_ordering(495) 00:16:23.562 fused_ordering(496) 00:16:23.562 fused_ordering(497) 00:16:23.562 fused_ordering(498) 00:16:23.562 fused_ordering(499) 00:16:23.562 fused_ordering(500) 00:16:23.562 fused_ordering(501) 00:16:23.562 fused_ordering(502) 00:16:23.562 fused_ordering(503) 00:16:23.562 fused_ordering(504) 00:16:23.562 fused_ordering(505) 00:16:23.562 fused_ordering(506) 00:16:23.562 fused_ordering(507) 00:16:23.562 fused_ordering(508) 00:16:23.562 fused_ordering(509) 00:16:23.562 fused_ordering(510) 00:16:23.562 fused_ordering(511) 00:16:23.562 fused_ordering(512) 00:16:23.562 fused_ordering(513) 00:16:23.562 fused_ordering(514) 00:16:23.562 fused_ordering(515) 00:16:23.562 fused_ordering(516) 00:16:23.562 fused_ordering(517) 00:16:23.562 fused_ordering(518) 00:16:23.562 fused_ordering(519) 00:16:23.562 fused_ordering(520) 00:16:23.562 fused_ordering(521) 00:16:23.562 fused_ordering(522) 00:16:23.562 fused_ordering(523) 00:16:23.562 fused_ordering(524) 00:16:23.562 fused_ordering(525) 00:16:23.562 fused_ordering(526) 00:16:23.562 
fused_ordering(527) 00:16:23.562 fused_ordering(528) 00:16:23.562 fused_ordering(529) 00:16:23.562 fused_ordering(530) 00:16:23.562 fused_ordering(531) 00:16:23.562 fused_ordering(532) 00:16:23.562 fused_ordering(533) 00:16:23.562 fused_ordering(534) 00:16:23.562 fused_ordering(535) 00:16:23.562 fused_ordering(536) 00:16:23.562 fused_ordering(537) 00:16:23.562 fused_ordering(538) 00:16:23.562 fused_ordering(539) 00:16:23.562 fused_ordering(540) 00:16:23.562 fused_ordering(541) 00:16:23.562 fused_ordering(542) 00:16:23.562 fused_ordering(543) 00:16:23.562 fused_ordering(544) 00:16:23.562 fused_ordering(545) 00:16:23.562 fused_ordering(546) 00:16:23.562 fused_ordering(547) 00:16:23.562 fused_ordering(548) 00:16:23.562 fused_ordering(549) 00:16:23.562 fused_ordering(550) 00:16:23.562 fused_ordering(551) 00:16:23.562 fused_ordering(552) 00:16:23.562 fused_ordering(553) 00:16:23.562 fused_ordering(554) 00:16:23.562 fused_ordering(555) 00:16:23.562 fused_ordering(556) 00:16:23.562 fused_ordering(557) 00:16:23.562 fused_ordering(558) 00:16:23.562 fused_ordering(559) 00:16:23.562 fused_ordering(560) 00:16:23.562 fused_ordering(561) 00:16:23.562 fused_ordering(562) 00:16:23.562 fused_ordering(563) 00:16:23.562 fused_ordering(564) 00:16:23.562 fused_ordering(565) 00:16:23.562 fused_ordering(566) 00:16:23.562 fused_ordering(567) 00:16:23.562 fused_ordering(568) 00:16:23.562 fused_ordering(569) 00:16:23.562 fused_ordering(570) 00:16:23.562 fused_ordering(571) 00:16:23.562 fused_ordering(572) 00:16:23.562 fused_ordering(573) 00:16:23.562 fused_ordering(574) 00:16:23.562 fused_ordering(575) 00:16:23.562 fused_ordering(576) 00:16:23.562 fused_ordering(577) 00:16:23.562 fused_ordering(578) 00:16:23.562 fused_ordering(579) 00:16:23.562 fused_ordering(580) 00:16:23.562 fused_ordering(581) 00:16:23.562 fused_ordering(582) 00:16:23.562 fused_ordering(583) 00:16:23.562 fused_ordering(584) 00:16:23.562 fused_ordering(585) 00:16:23.562 fused_ordering(586) 00:16:23.562 fused_ordering(587) 00:16:23.562 fused_ordering(588) 00:16:23.562 fused_ordering(589) 00:16:23.562 fused_ordering(590) 00:16:23.562 fused_ordering(591) 00:16:23.562 fused_ordering(592) 00:16:23.562 fused_ordering(593) 00:16:23.562 fused_ordering(594) 00:16:23.562 fused_ordering(595) 00:16:23.562 fused_ordering(596) 00:16:23.562 fused_ordering(597) 00:16:23.562 fused_ordering(598) 00:16:23.562 fused_ordering(599) 00:16:23.562 fused_ordering(600) 00:16:23.562 fused_ordering(601) 00:16:23.562 fused_ordering(602) 00:16:23.562 fused_ordering(603) 00:16:23.562 fused_ordering(604) 00:16:23.563 fused_ordering(605) 00:16:23.563 fused_ordering(606) 00:16:23.563 fused_ordering(607) 00:16:23.563 fused_ordering(608) 00:16:23.563 fused_ordering(609) 00:16:23.563 fused_ordering(610) 00:16:23.563 fused_ordering(611) 00:16:23.563 fused_ordering(612) 00:16:23.563 fused_ordering(613) 00:16:23.563 fused_ordering(614) 00:16:23.563 fused_ordering(615) 00:16:23.563 fused_ordering(616) 00:16:23.563 fused_ordering(617) 00:16:23.563 fused_ordering(618) 00:16:23.563 fused_ordering(619) 00:16:23.563 fused_ordering(620) 00:16:23.563 fused_ordering(621) 00:16:23.563 fused_ordering(622) 00:16:23.563 fused_ordering(623) 00:16:23.563 fused_ordering(624) 00:16:23.563 fused_ordering(625) 00:16:23.563 fused_ordering(626) 00:16:23.563 fused_ordering(627) 00:16:23.563 fused_ordering(628) 00:16:23.563 fused_ordering(629) 00:16:23.563 fused_ordering(630) 00:16:23.563 fused_ordering(631) 00:16:23.563 fused_ordering(632) 00:16:23.563 fused_ordering(633) 00:16:23.563 fused_ordering(634) 
00:16:23.563 fused_ordering(635) 00:16:23.563 fused_ordering(636) 00:16:23.563 fused_ordering(637) 00:16:23.563 fused_ordering(638) 00:16:23.563 fused_ordering(639) 00:16:23.563 fused_ordering(640) 00:16:23.563 fused_ordering(641) 00:16:23.563 fused_ordering(642) 00:16:23.563 fused_ordering(643) 00:16:23.563 fused_ordering(644) 00:16:23.563 fused_ordering(645) 00:16:23.563 fused_ordering(646) 00:16:23.563 fused_ordering(647) 00:16:23.563 fused_ordering(648) 00:16:23.563 fused_ordering(649) 00:16:23.563 fused_ordering(650) 00:16:23.563 fused_ordering(651) 00:16:23.563 fused_ordering(652) 00:16:23.563 fused_ordering(653) 00:16:23.563 fused_ordering(654) 00:16:23.563 fused_ordering(655) 00:16:23.563 fused_ordering(656) 00:16:23.563 fused_ordering(657) 00:16:23.563 fused_ordering(658) 00:16:23.563 fused_ordering(659) 00:16:23.563 fused_ordering(660) 00:16:23.563 fused_ordering(661) 00:16:23.563 fused_ordering(662) 00:16:23.563 fused_ordering(663) 00:16:23.563 fused_ordering(664) 00:16:23.563 fused_ordering(665) 00:16:23.563 fused_ordering(666) 00:16:23.563 fused_ordering(667) 00:16:23.563 fused_ordering(668) 00:16:23.563 fused_ordering(669) 00:16:23.563 fused_ordering(670) 00:16:23.563 fused_ordering(671) 00:16:23.563 fused_ordering(672) 00:16:23.563 fused_ordering(673) 00:16:23.563 fused_ordering(674) 00:16:23.563 fused_ordering(675) 00:16:23.563 fused_ordering(676) 00:16:23.563 fused_ordering(677) 00:16:23.563 fused_ordering(678) 00:16:23.563 fused_ordering(679) 00:16:23.563 fused_ordering(680) 00:16:23.563 fused_ordering(681) 00:16:23.563 fused_ordering(682) 00:16:23.563 fused_ordering(683) 00:16:23.563 fused_ordering(684) 00:16:23.563 fused_ordering(685) 00:16:23.563 fused_ordering(686) 00:16:23.563 fused_ordering(687) 00:16:23.563 fused_ordering(688) 00:16:23.563 fused_ordering(689) 00:16:23.563 fused_ordering(690) 00:16:23.563 fused_ordering(691) 00:16:23.563 fused_ordering(692) 00:16:23.563 fused_ordering(693) 00:16:23.563 fused_ordering(694) 00:16:23.563 fused_ordering(695) 00:16:23.563 fused_ordering(696) 00:16:23.563 fused_ordering(697) 00:16:23.563 fused_ordering(698) 00:16:23.563 fused_ordering(699) 00:16:23.563 fused_ordering(700) 00:16:23.563 fused_ordering(701) 00:16:23.563 fused_ordering(702) 00:16:23.563 fused_ordering(703) 00:16:23.563 fused_ordering(704) 00:16:23.563 fused_ordering(705) 00:16:23.563 fused_ordering(706) 00:16:23.563 fused_ordering(707) 00:16:23.563 fused_ordering(708) 00:16:23.563 fused_ordering(709) 00:16:23.563 fused_ordering(710) 00:16:23.563 fused_ordering(711) 00:16:23.563 fused_ordering(712) 00:16:23.563 fused_ordering(713) 00:16:23.563 fused_ordering(714) 00:16:23.563 fused_ordering(715) 00:16:23.563 fused_ordering(716) 00:16:23.563 fused_ordering(717) 00:16:23.563 fused_ordering(718) 00:16:23.563 fused_ordering(719) 00:16:23.563 fused_ordering(720) 00:16:23.563 fused_ordering(721) 00:16:23.563 fused_ordering(722) 00:16:23.563 fused_ordering(723) 00:16:23.563 fused_ordering(724) 00:16:23.563 fused_ordering(725) 00:16:23.563 fused_ordering(726) 00:16:23.563 fused_ordering(727) 00:16:23.563 fused_ordering(728) 00:16:23.563 fused_ordering(729) 00:16:23.563 fused_ordering(730) 00:16:23.563 fused_ordering(731) 00:16:23.563 fused_ordering(732) 00:16:23.563 fused_ordering(733) 00:16:23.563 fused_ordering(734) 00:16:23.563 fused_ordering(735) 00:16:23.563 fused_ordering(736) 00:16:23.563 fused_ordering(737) 00:16:23.563 fused_ordering(738) 00:16:23.563 fused_ordering(739) 00:16:23.563 fused_ordering(740) 00:16:23.563 fused_ordering(741) 00:16:23.563 
fused_ordering(742) 00:16:23.563 fused_ordering(743) 00:16:23.563 fused_ordering(744) 00:16:23.563 fused_ordering(745) 00:16:23.563 fused_ordering(746) 00:16:23.563 fused_ordering(747) 00:16:23.563 fused_ordering(748) 00:16:23.563 fused_ordering(749) 00:16:23.563 fused_ordering(750) 00:16:23.563 fused_ordering(751) 00:16:23.563 fused_ordering(752) 00:16:23.563 fused_ordering(753) 00:16:23.563 fused_ordering(754) 00:16:23.563 fused_ordering(755) 00:16:23.563 fused_ordering(756) 00:16:23.563 fused_ordering(757) 00:16:23.563 fused_ordering(758) 00:16:23.563 fused_ordering(759) 00:16:23.563 fused_ordering(760) 00:16:23.563 fused_ordering(761) 00:16:23.563 fused_ordering(762) 00:16:23.563 fused_ordering(763) 00:16:23.563 fused_ordering(764) 00:16:23.563 fused_ordering(765) 00:16:23.563 fused_ordering(766) 00:16:23.563 fused_ordering(767) 00:16:23.563 fused_ordering(768) 00:16:23.563 fused_ordering(769) 00:16:23.563 fused_ordering(770) 00:16:23.563 fused_ordering(771) 00:16:23.563 fused_ordering(772) 00:16:23.563 fused_ordering(773) 00:16:23.563 fused_ordering(774) 00:16:23.563 fused_ordering(775) 00:16:23.563 fused_ordering(776) 00:16:23.563 fused_ordering(777) 00:16:23.563 fused_ordering(778) 00:16:23.563 fused_ordering(779) 00:16:23.563 fused_ordering(780) 00:16:23.563 fused_ordering(781) 00:16:23.563 fused_ordering(782) 00:16:23.563 fused_ordering(783) 00:16:23.563 fused_ordering(784) 00:16:23.563 fused_ordering(785) 00:16:23.563 fused_ordering(786) 00:16:23.563 fused_ordering(787) 00:16:23.563 fused_ordering(788) 00:16:23.563 fused_ordering(789) 00:16:23.563 fused_ordering(790) 00:16:23.563 fused_ordering(791) 00:16:23.563 fused_ordering(792) 00:16:23.563 fused_ordering(793) 00:16:23.563 fused_ordering(794) 00:16:23.563 fused_ordering(795) 00:16:23.563 fused_ordering(796) 00:16:23.563 fused_ordering(797) 00:16:23.563 fused_ordering(798) 00:16:23.563 fused_ordering(799) 00:16:23.563 fused_ordering(800) 00:16:23.563 fused_ordering(801) 00:16:23.563 fused_ordering(802) 00:16:23.563 fused_ordering(803) 00:16:23.563 fused_ordering(804) 00:16:23.563 fused_ordering(805) 00:16:23.563 fused_ordering(806) 00:16:23.563 fused_ordering(807) 00:16:23.563 fused_ordering(808) 00:16:23.563 fused_ordering(809) 00:16:23.563 fused_ordering(810) 00:16:23.563 fused_ordering(811) 00:16:23.563 fused_ordering(812) 00:16:23.563 fused_ordering(813) 00:16:23.563 fused_ordering(814) 00:16:23.563 fused_ordering(815) 00:16:23.563 fused_ordering(816) 00:16:23.563 fused_ordering(817) 00:16:23.563 fused_ordering(818) 00:16:23.563 fused_ordering(819) 00:16:23.563 fused_ordering(820) 00:16:23.823 fused_ordering(821) 00:16:23.823 fused_ordering(822) 00:16:23.823 fused_ordering(823) 00:16:23.823 fused_ordering(824) 00:16:23.823 fused_ordering(825) 00:16:23.823 fused_ordering(826) 00:16:23.823 fused_ordering(827) 00:16:23.823 fused_ordering(828) 00:16:23.823 fused_ordering(829) 00:16:23.823 fused_ordering(830) 00:16:23.823 fused_ordering(831) 00:16:23.823 fused_ordering(832) 00:16:23.823 fused_ordering(833) 00:16:23.823 fused_ordering(834) 00:16:23.823 fused_ordering(835) 00:16:23.823 fused_ordering(836) 00:16:23.823 fused_ordering(837) 00:16:23.823 fused_ordering(838) 00:16:23.823 fused_ordering(839) 00:16:23.823 fused_ordering(840) 00:16:23.823 fused_ordering(841) 00:16:23.823 fused_ordering(842) 00:16:23.823 fused_ordering(843) 00:16:23.823 fused_ordering(844) 00:16:23.823 fused_ordering(845) 00:16:23.823 fused_ordering(846) 00:16:23.823 fused_ordering(847) 00:16:23.823 fused_ordering(848) 00:16:23.823 fused_ordering(849) 
00:16:23.823 fused_ordering(850) 00:16:23.823 fused_ordering(851) 00:16:23.823 fused_ordering(852) 00:16:23.823 fused_ordering(853) 00:16:23.823 fused_ordering(854) 00:16:23.823 fused_ordering(855) 00:16:23.823 fused_ordering(856) 00:16:23.823 fused_ordering(857) 00:16:23.823 fused_ordering(858) 00:16:23.823 fused_ordering(859) 00:16:23.823 fused_ordering(860) 00:16:23.823 fused_ordering(861) 00:16:23.823 fused_ordering(862) 00:16:23.823 fused_ordering(863) 00:16:23.823 fused_ordering(864) 00:16:23.823 fused_ordering(865) 00:16:23.823 fused_ordering(866) 00:16:23.823 fused_ordering(867) 00:16:23.823 fused_ordering(868) 00:16:23.823 fused_ordering(869) 00:16:23.823 fused_ordering(870) 00:16:23.823 fused_ordering(871) 00:16:23.823 fused_ordering(872) 00:16:23.823 fused_ordering(873) 00:16:23.823 fused_ordering(874) 00:16:23.823 fused_ordering(875) 00:16:23.823 fused_ordering(876) 00:16:23.823 fused_ordering(877) 00:16:23.823 fused_ordering(878) 00:16:23.823 fused_ordering(879) 00:16:23.823 fused_ordering(880) 00:16:23.823 fused_ordering(881) 00:16:23.823 fused_ordering(882) 00:16:23.823 fused_ordering(883) 00:16:23.823 fused_ordering(884) 00:16:23.823 fused_ordering(885) 00:16:23.823 fused_ordering(886) 00:16:23.823 fused_ordering(887) 00:16:23.823 fused_ordering(888) 00:16:23.823 fused_ordering(889) 00:16:23.823 fused_ordering(890) 00:16:23.823 fused_ordering(891) 00:16:23.823 fused_ordering(892) 00:16:23.823 fused_ordering(893) 00:16:23.823 fused_ordering(894) 00:16:23.823 fused_ordering(895) 00:16:23.823 fused_ordering(896) 00:16:23.823 fused_ordering(897) 00:16:23.823 fused_ordering(898) 00:16:23.823 fused_ordering(899) 00:16:23.823 fused_ordering(900) 00:16:23.823 fused_ordering(901) 00:16:23.823 fused_ordering(902) 00:16:23.823 fused_ordering(903) 00:16:23.823 fused_ordering(904) 00:16:23.823 fused_ordering(905) 00:16:23.823 fused_ordering(906) 00:16:23.823 fused_ordering(907) 00:16:23.823 fused_ordering(908) 00:16:23.823 fused_ordering(909) 00:16:23.823 fused_ordering(910) 00:16:23.823 fused_ordering(911) 00:16:23.823 fused_ordering(912) 00:16:23.823 fused_ordering(913) 00:16:23.823 fused_ordering(914) 00:16:23.823 fused_ordering(915) 00:16:23.823 fused_ordering(916) 00:16:23.823 fused_ordering(917) 00:16:23.823 fused_ordering(918) 00:16:23.823 fused_ordering(919) 00:16:23.823 fused_ordering(920) 00:16:23.823 fused_ordering(921) 00:16:23.823 fused_ordering(922) 00:16:23.823 fused_ordering(923) 00:16:23.823 fused_ordering(924) 00:16:23.823 fused_ordering(925) 00:16:23.823 fused_ordering(926) 00:16:23.823 fused_ordering(927) 00:16:23.823 fused_ordering(928) 00:16:23.823 fused_ordering(929) 00:16:23.823 fused_ordering(930) 00:16:23.823 fused_ordering(931) 00:16:23.823 fused_ordering(932) 00:16:23.823 fused_ordering(933) 00:16:23.823 fused_ordering(934) 00:16:23.823 fused_ordering(935) 00:16:23.823 fused_ordering(936) 00:16:23.823 fused_ordering(937) 00:16:23.823 fused_ordering(938) 00:16:23.823 fused_ordering(939) 00:16:23.823 fused_ordering(940) 00:16:23.823 fused_ordering(941) 00:16:23.823 fused_ordering(942) 00:16:23.823 fused_ordering(943) 00:16:23.823 fused_ordering(944) 00:16:23.823 fused_ordering(945) 00:16:23.823 fused_ordering(946) 00:16:23.823 fused_ordering(947) 00:16:23.823 fused_ordering(948) 00:16:23.823 fused_ordering(949) 00:16:23.823 fused_ordering(950) 00:16:23.823 fused_ordering(951) 00:16:23.823 fused_ordering(952) 00:16:23.823 fused_ordering(953) 00:16:23.823 fused_ordering(954) 00:16:23.823 fused_ordering(955) 00:16:23.823 fused_ordering(956) 00:16:23.823 
fused_ordering(957) 00:16:23.823 fused_ordering(958) 00:16:23.823 fused_ordering(959) 00:16:23.823 fused_ordering(960) 00:16:23.823 fused_ordering(961) 00:16:23.823 fused_ordering(962) 00:16:23.823 fused_ordering(963) 00:16:23.823 fused_ordering(964) 00:16:23.823 fused_ordering(965) 00:16:23.823 fused_ordering(966) 00:16:23.823 fused_ordering(967) 00:16:23.823 fused_ordering(968) 00:16:23.823 fused_ordering(969) 00:16:23.823 fused_ordering(970) 00:16:23.823 fused_ordering(971) 00:16:23.823 fused_ordering(972) 00:16:23.823 fused_ordering(973) 00:16:23.823 fused_ordering(974) 00:16:23.823 fused_ordering(975) 00:16:23.823 fused_ordering(976) 00:16:23.823 fused_ordering(977) 00:16:23.823 fused_ordering(978) 00:16:23.823 fused_ordering(979) 00:16:23.823 fused_ordering(980) 00:16:23.823 fused_ordering(981) 00:16:23.823 fused_ordering(982) 00:16:23.823 fused_ordering(983) 00:16:23.823 fused_ordering(984) 00:16:23.823 fused_ordering(985) 00:16:23.823 fused_ordering(986) 00:16:23.823 fused_ordering(987) 00:16:23.823 fused_ordering(988) 00:16:23.823 fused_ordering(989) 00:16:23.823 fused_ordering(990) 00:16:23.823 fused_ordering(991) 00:16:23.823 fused_ordering(992) 00:16:23.823 fused_ordering(993) 00:16:23.823 fused_ordering(994) 00:16:23.823 fused_ordering(995) 00:16:23.823 fused_ordering(996) 00:16:23.823 fused_ordering(997) 00:16:23.823 fused_ordering(998) 00:16:23.823 fused_ordering(999) 00:16:23.823 fused_ordering(1000) 00:16:23.823 fused_ordering(1001) 00:16:23.823 fused_ordering(1002) 00:16:23.823 fused_ordering(1003) 00:16:23.823 fused_ordering(1004) 00:16:23.823 fused_ordering(1005) 00:16:23.823 fused_ordering(1006) 00:16:23.823 fused_ordering(1007) 00:16:23.823 fused_ordering(1008) 00:16:23.823 fused_ordering(1009) 00:16:23.823 fused_ordering(1010) 00:16:23.823 fused_ordering(1011) 00:16:23.823 fused_ordering(1012) 00:16:23.823 fused_ordering(1013) 00:16:23.823 fused_ordering(1014) 00:16:23.823 fused_ordering(1015) 00:16:23.823 fused_ordering(1016) 00:16:23.823 fused_ordering(1017) 00:16:23.823 fused_ordering(1018) 00:16:23.823 fused_ordering(1019) 00:16:23.823 fused_ordering(1020) 00:16:23.823 fused_ordering(1021) 00:16:23.823 fused_ordering(1022) 00:16:23.823 fused_ordering(1023) 00:16:23.823 11:10:44 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:16:23.823 11:10:44 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:16:23.823 11:10:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:23.823 11:10:44 -- nvmf/common.sh@116 -- # sync 00:16:23.823 11:10:44 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:16:23.823 11:10:44 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:16:23.823 11:10:44 -- nvmf/common.sh@119 -- # set +e 00:16:23.823 11:10:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:23.823 11:10:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:16:23.823 rmmod nvme_rdma 00:16:23.823 rmmod nvme_fabrics 00:16:23.823 11:10:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:23.823 11:10:44 -- nvmf/common.sh@123 -- # set -e 00:16:23.823 11:10:44 -- nvmf/common.sh@124 -- # return 0 00:16:23.823 11:10:44 -- nvmf/common.sh@477 -- # '[' -n 1594006 ']' 00:16:23.823 11:10:44 -- nvmf/common.sh@478 -- # killprocess 1594006 00:16:23.823 11:10:44 -- common/autotest_common.sh@936 -- # '[' -z 1594006 ']' 00:16:23.823 11:10:44 -- common/autotest_common.sh@940 -- # kill -0 1594006 00:16:23.823 11:10:44 -- common/autotest_common.sh@941 -- # uname 00:16:23.823 11:10:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:23.823 11:10:44 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1594006 00:16:23.823 11:10:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:23.823 11:10:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:23.823 11:10:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1594006' 00:16:23.823 killing process with pid 1594006 00:16:23.823 11:10:44 -- common/autotest_common.sh@955 -- # kill 1594006 00:16:23.823 11:10:44 -- common/autotest_common.sh@960 -- # wait 1594006 00:16:24.083 11:10:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:24.083 11:10:44 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:16:24.083 00:16:24.083 real 0m7.653s 00:16:24.083 user 0m4.280s 00:16:24.083 sys 0m4.558s 00:16:24.083 11:10:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:24.083 11:10:44 -- common/autotest_common.sh@10 -- # set +x 00:16:24.083 ************************************ 00:16:24.083 END TEST nvmf_fused_ordering 00:16:24.083 ************************************ 00:16:24.083 11:10:44 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:16:24.083 11:10:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:24.083 11:10:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:24.083 11:10:44 -- common/autotest_common.sh@10 -- # set +x 00:16:24.083 ************************************ 00:16:24.083 START TEST nvmf_delete_subsystem 00:16:24.083 ************************************ 00:16:24.083 11:10:44 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:16:24.343 * Looking for test storage... 00:16:24.343 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:24.343 11:10:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:24.343 11:10:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:24.343 11:10:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:24.343 11:10:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:24.343 11:10:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:24.343 11:10:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:24.343 11:10:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:24.343 11:10:44 -- scripts/common.sh@335 -- # IFS=.-: 00:16:24.343 11:10:44 -- scripts/common.sh@335 -- # read -ra ver1 00:16:24.343 11:10:44 -- scripts/common.sh@336 -- # IFS=.-: 00:16:24.343 11:10:44 -- scripts/common.sh@336 -- # read -ra ver2 00:16:24.343 11:10:44 -- scripts/common.sh@337 -- # local 'op=<' 00:16:24.343 11:10:44 -- scripts/common.sh@339 -- # ver1_l=2 00:16:24.343 11:10:44 -- scripts/common.sh@340 -- # ver2_l=1 00:16:24.343 11:10:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:24.343 11:10:44 -- scripts/common.sh@343 -- # case "$op" in 00:16:24.343 11:10:44 -- scripts/common.sh@344 -- # : 1 00:16:24.343 11:10:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:24.343 11:10:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:24.343 11:10:44 -- scripts/common.sh@364 -- # decimal 1 00:16:24.343 11:10:44 -- scripts/common.sh@352 -- # local d=1 00:16:24.343 11:10:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:24.343 11:10:44 -- scripts/common.sh@354 -- # echo 1 00:16:24.343 11:10:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:24.343 11:10:44 -- scripts/common.sh@365 -- # decimal 2 00:16:24.343 11:10:44 -- scripts/common.sh@352 -- # local d=2 00:16:24.343 11:10:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:24.343 11:10:44 -- scripts/common.sh@354 -- # echo 2 00:16:24.343 11:10:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:24.343 11:10:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:24.343 11:10:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:24.343 11:10:44 -- scripts/common.sh@367 -- # return 0 00:16:24.343 11:10:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:24.343 11:10:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:24.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.343 --rc genhtml_branch_coverage=1 00:16:24.343 --rc genhtml_function_coverage=1 00:16:24.343 --rc genhtml_legend=1 00:16:24.343 --rc geninfo_all_blocks=1 00:16:24.343 --rc geninfo_unexecuted_blocks=1 00:16:24.343 00:16:24.343 ' 00:16:24.343 11:10:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:24.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.343 --rc genhtml_branch_coverage=1 00:16:24.343 --rc genhtml_function_coverage=1 00:16:24.343 --rc genhtml_legend=1 00:16:24.343 --rc geninfo_all_blocks=1 00:16:24.343 --rc geninfo_unexecuted_blocks=1 00:16:24.343 00:16:24.343 ' 00:16:24.343 11:10:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:24.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.343 --rc genhtml_branch_coverage=1 00:16:24.343 --rc genhtml_function_coverage=1 00:16:24.343 --rc genhtml_legend=1 00:16:24.343 --rc geninfo_all_blocks=1 00:16:24.343 --rc geninfo_unexecuted_blocks=1 00:16:24.343 00:16:24.343 ' 00:16:24.343 11:10:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:24.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.343 --rc genhtml_branch_coverage=1 00:16:24.343 --rc genhtml_function_coverage=1 00:16:24.343 --rc genhtml_legend=1 00:16:24.343 --rc geninfo_all_blocks=1 00:16:24.343 --rc geninfo_unexecuted_blocks=1 00:16:24.343 00:16:24.343 ' 00:16:24.343 11:10:44 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:24.343 11:10:44 -- nvmf/common.sh@7 -- # uname -s 00:16:24.343 11:10:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:24.343 11:10:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:24.343 11:10:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:24.343 11:10:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:24.343 11:10:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:24.343 11:10:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:24.343 11:10:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:24.343 11:10:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:24.343 11:10:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:24.343 11:10:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:24.343 11:10:44 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:16:24.343 11:10:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:16:24.343 11:10:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:24.343 11:10:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:24.343 11:10:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:24.343 11:10:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:24.343 11:10:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:24.343 11:10:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:24.343 11:10:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:24.343 11:10:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.343 11:10:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.343 11:10:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.344 11:10:44 -- paths/export.sh@5 -- # export PATH 00:16:24.344 11:10:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.344 11:10:44 -- nvmf/common.sh@46 -- # : 0 00:16:24.344 11:10:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:24.344 11:10:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:24.344 11:10:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:24.344 11:10:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:24.344 11:10:44 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:24.344 11:10:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:24.344 11:10:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:24.344 11:10:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:24.344 11:10:44 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:16:24.344 11:10:44 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:16:24.344 11:10:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:24.344 11:10:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:24.344 11:10:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:24.344 11:10:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:24.344 11:10:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.344 11:10:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:24.344 11:10:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.344 11:10:44 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:24.344 11:10:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:24.344 11:10:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:24.344 11:10:44 -- common/autotest_common.sh@10 -- # set +x 00:16:29.618 11:10:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:29.618 11:10:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:29.618 11:10:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:29.618 11:10:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:29.618 11:10:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:29.618 11:10:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:29.618 11:10:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:29.618 11:10:49 -- nvmf/common.sh@294 -- # net_devs=() 00:16:29.618 11:10:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:29.618 11:10:49 -- nvmf/common.sh@295 -- # e810=() 00:16:29.618 11:10:49 -- nvmf/common.sh@295 -- # local -ga e810 00:16:29.618 11:10:49 -- nvmf/common.sh@296 -- # x722=() 00:16:29.618 11:10:49 -- nvmf/common.sh@296 -- # local -ga x722 00:16:29.618 11:10:49 -- nvmf/common.sh@297 -- # mlx=() 00:16:29.618 11:10:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:29.618 11:10:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:29.618 11:10:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:29.618 11:10:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:29.618 11:10:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:29.618 11:10:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:29.618 11:10:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:29.618 11:10:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:29.618 11:10:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:29.618 11:10:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:29.618 11:10:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:29.618 11:10:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:29.618 11:10:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:29.618 11:10:49 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:16:29.618 11:10:49 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:16:29.618 11:10:49 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:16:29.618 11:10:49 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 
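The per-device loop traced below resolves each selected Mellanox PCI function to its kernel net interface through sysfs. As a hedged, stand-alone sketch of that lookup (illustrative helper name, not the exact nvmf/common.sh code, assuming the standard /sys/bus/pci layout):

# Sketch only: mirrors the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) step in the loop below.
pci_to_netdevs() {
    local pci=$1 dev devs=()
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $dev ]] || continue    # unmatched glob: no net child under this PCI function
        devs+=("${dev##*/}")         # keep just the interface name, e.g. mlx_0_0
    done
    ((${#devs[@]})) && echo "Found net devices under $pci: ${devs[*]}"
}
# On this rig, pci_to_netdevs 0000:18:00.0 would report mlx_0_0.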
00:16:29.618 11:10:49 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:16:29.618 11:10:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:29.618 11:10:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:29.618 11:10:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:16:29.618 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:16:29.618 11:10:49 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:29.618 11:10:49 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:29.618 11:10:49 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:29.618 11:10:49 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:29.618 11:10:49 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:29.618 11:10:49 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:29.618 11:10:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:29.618 11:10:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:16:29.618 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:16:29.618 11:10:49 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:29.618 11:10:49 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:29.618 11:10:49 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:29.618 11:10:49 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:29.618 11:10:49 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:29.618 11:10:49 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:29.618 11:10:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:29.618 11:10:49 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:16:29.619 11:10:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:29.619 11:10:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:29.619 11:10:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:29.619 11:10:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:29.619 11:10:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:16:29.619 Found net devices under 0000:18:00.0: mlx_0_0 00:16:29.619 11:10:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:29.619 11:10:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:29.619 11:10:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:29.619 11:10:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:29.619 11:10:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:29.619 11:10:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:16:29.619 Found net devices under 0000:18:00.1: mlx_0_1 00:16:29.619 11:10:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:29.619 11:10:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:29.619 11:10:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:29.619 11:10:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:29.619 11:10:49 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:16:29.619 11:10:49 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:16:29.619 11:10:49 -- nvmf/common.sh@408 -- # rdma_device_init 00:16:29.619 11:10:49 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:16:29.619 11:10:49 -- nvmf/common.sh@57 -- # uname 00:16:29.619 11:10:49 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:16:29.619 11:10:49 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:16:29.619 11:10:49 -- nvmf/common.sh@62 -- # modprobe ib_core 00:16:29.619 11:10:49 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:16:29.619 
11:10:49 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:16:29.619 11:10:49 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:16:29.619 11:10:49 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:16:29.619 11:10:49 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:16:29.619 11:10:49 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:16:29.619 11:10:49 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:29.619 11:10:49 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:16:29.619 11:10:49 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:29.619 11:10:49 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:29.619 11:10:49 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:29.619 11:10:49 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:29.619 11:10:49 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:29.619 11:10:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:29.619 11:10:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:29.619 11:10:49 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:29.619 11:10:49 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:29.619 11:10:49 -- nvmf/common.sh@104 -- # continue 2 00:16:29.619 11:10:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:29.619 11:10:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:29.619 11:10:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:29.619 11:10:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:29.619 11:10:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:29.619 11:10:49 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:29.619 11:10:49 -- nvmf/common.sh@104 -- # continue 2 00:16:29.619 11:10:49 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:29.619 11:10:49 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:16:29.619 11:10:49 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:29.619 11:10:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:29.619 11:10:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:29.619 11:10:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:29.619 11:10:49 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:16:29.619 11:10:49 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:16:29.619 11:10:49 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:16:29.619 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:29.619 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:16:29.619 altname enp24s0f0np0 00:16:29.619 altname ens785f0np0 00:16:29.619 inet 192.168.100.8/24 scope global mlx_0_0 00:16:29.619 valid_lft forever preferred_lft forever 00:16:29.619 11:10:49 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:29.619 11:10:49 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:16:29.619 11:10:49 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:29.619 11:10:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:29.619 11:10:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:29.619 11:10:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:29.619 11:10:49 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:16:29.619 11:10:49 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:16:29.619 11:10:49 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:16:29.619 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:29.619 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:16:29.619 altname enp24s0f1np1 00:16:29.619 
altname ens785f1np1 00:16:29.619 inet 192.168.100.9/24 scope global mlx_0_1 00:16:29.619 valid_lft forever preferred_lft forever 00:16:29.619 11:10:49 -- nvmf/common.sh@410 -- # return 0 00:16:29.619 11:10:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:29.619 11:10:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:29.619 11:10:49 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:16:29.619 11:10:49 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:16:29.619 11:10:49 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:16:29.619 11:10:49 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:29.619 11:10:49 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:29.619 11:10:49 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:29.619 11:10:49 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:29.619 11:10:49 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:29.619 11:10:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:29.619 11:10:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:29.619 11:10:49 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:29.619 11:10:49 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:29.619 11:10:49 -- nvmf/common.sh@104 -- # continue 2 00:16:29.619 11:10:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:29.619 11:10:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:29.619 11:10:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:29.619 11:10:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:29.619 11:10:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:29.619 11:10:49 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:29.619 11:10:49 -- nvmf/common.sh@104 -- # continue 2 00:16:29.619 11:10:49 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:29.619 11:10:49 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:16:29.619 11:10:49 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:29.619 11:10:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:29.619 11:10:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:29.619 11:10:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:29.619 11:10:49 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:29.619 11:10:49 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:16:29.619 11:10:49 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:29.619 11:10:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:29.619 11:10:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:29.619 11:10:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:29.619 11:10:49 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:16:29.619 192.168.100.9' 00:16:29.619 11:10:49 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:16:29.619 192.168.100.9' 00:16:29.619 11:10:49 -- nvmf/common.sh@445 -- # head -n 1 00:16:29.619 11:10:49 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:29.619 11:10:49 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:29.619 192.168.100.9' 00:16:29.619 11:10:49 -- nvmf/common.sh@446 -- # tail -n +2 00:16:29.619 11:10:49 -- nvmf/common.sh@446 -- # head -n 1 00:16:29.619 11:10:49 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:29.619 11:10:49 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:16:29.619 11:10:49 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 
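The address harvesting just traced reduces to parsing `ip -o -4 addr show` for each RDMA-capable interface. A hedged one-function sketch of that step (illustrative name, not the exact nvmf/common.sh helper):

# Sketch: pick the address/prefix field and strip the prefix length, as in the trace above.
get_ip_address_sketch() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
# On this setup: get_ip_address_sketch mlx_0_0 -> 192.168.100.8
#                get_ip_address_sketch mlx_0_1 -> 192.168.100.9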
00:16:29.619 11:10:49 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:16:29.619 11:10:49 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:16:29.619 11:10:49 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:16:29.619 11:10:50 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:16:29.619 11:10:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:29.619 11:10:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:29.619 11:10:50 -- common/autotest_common.sh@10 -- # set +x 00:16:29.619 11:10:50 -- nvmf/common.sh@469 -- # nvmfpid=1597561 00:16:29.619 11:10:50 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:29.619 11:10:50 -- nvmf/common.sh@470 -- # waitforlisten 1597561 00:16:29.619 11:10:50 -- common/autotest_common.sh@829 -- # '[' -z 1597561 ']' 00:16:29.619 11:10:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.619 11:10:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:29.619 11:10:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.619 11:10:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:29.619 11:10:50 -- common/autotest_common.sh@10 -- # set +x 00:16:29.619 [2024-12-13 11:10:50.065577] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:29.619 [2024-12-13 11:10:50.065622] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:29.619 EAL: No free 2048 kB hugepages reported on node 1 00:16:29.619 [2024-12-13 11:10:50.119507] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:29.879 [2024-12-13 11:10:50.191996] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:29.879 [2024-12-13 11:10:50.192099] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:29.879 [2024-12-13 11:10:50.192106] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:29.879 [2024-12-13 11:10:50.192112] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
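With nvmf_tgt up and answering on /var/tmp/spdk.sock, the trace below drives the test setup over JSON-RPC through the rpc_cmd wrapper. Condensed into direct scripts/rpc.py calls, as a hedged sketch whose arguments simply mirror the traced commands:

# Sketch of the setup sequence traced below (same arguments, rpc.py invoked directly).
rpc="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192        # RDMA transport
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$rpc bdev_null_create NULL1 1000 512                                        # null backing bdev
$rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# spdk_nvme_perf then runs IO against cnode1 while nvmf_delete_subsystem removes it,
# which is what produces the qpair completion errors seen further on.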
00:16:29.879 [2024-12-13 11:10:50.192146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:29.879 [2024-12-13 11:10:50.192149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.447 11:10:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:30.447 11:10:50 -- common/autotest_common.sh@862 -- # return 0 00:16:30.447 11:10:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:30.447 11:10:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:30.447 11:10:50 -- common/autotest_common.sh@10 -- # set +x 00:16:30.447 11:10:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:30.447 11:10:50 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:30.447 11:10:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.447 11:10:50 -- common/autotest_common.sh@10 -- # set +x 00:16:30.447 [2024-12-13 11:10:50.913404] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13e9320/0x13ed810) succeed. 00:16:30.447 [2024-12-13 11:10:50.921331] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13ea820/0x142eeb0) succeed. 00:16:30.447 11:10:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.447 11:10:50 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:30.447 11:10:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.447 11:10:50 -- common/autotest_common.sh@10 -- # set +x 00:16:30.447 11:10:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.447 11:10:51 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:30.447 11:10:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.447 11:10:51 -- common/autotest_common.sh@10 -- # set +x 00:16:30.447 [2024-12-13 11:10:51.005243] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:30.447 11:10:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.447 11:10:51 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:30.447 11:10:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.447 11:10:51 -- common/autotest_common.sh@10 -- # set +x 00:16:30.706 NULL1 00:16:30.706 11:10:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.706 11:10:51 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:30.706 11:10:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.706 11:10:51 -- common/autotest_common.sh@10 -- # set +x 00:16:30.706 Delay0 00:16:30.706 11:10:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.706 11:10:51 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:30.706 11:10:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.706 11:10:51 -- common/autotest_common.sh@10 -- # set +x 00:16:30.706 11:10:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.706 11:10:51 -- target/delete_subsystem.sh@28 -- # perf_pid=1597697 00:16:30.706 11:10:51 -- target/delete_subsystem.sh@30 -- # sleep 2 00:16:30.706 11:10:51 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma 
adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:30.706 EAL: No free 2048 kB hugepages reported on node 1 00:16:30.706 [2024-12-13 11:10:51.101440] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:32.614 11:10:53 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:32.614 11:10:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.614 11:10:53 -- common/autotest_common.sh@10 -- # set +x 00:16:33.993 NVMe io qpair process completion error 00:16:33.993 NVMe io qpair process completion error 00:16:33.993 NVMe io qpair process completion error 00:16:33.993 NVMe io qpair process completion error 00:16:33.993 NVMe io qpair process completion error 00:16:33.993 NVMe io qpair process completion error 00:16:33.993 11:10:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.993 11:10:54 -- target/delete_subsystem.sh@34 -- # delay=0 00:16:33.993 11:10:54 -- target/delete_subsystem.sh@35 -- # kill -0 1597697 00:16:33.993 11:10:54 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:16:34.252 11:10:54 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:16:34.252 11:10:54 -- target/delete_subsystem.sh@35 -- # kill -0 1597697 00:16:34.252 11:10:54 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Write completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Write completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Write completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Write completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Write completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Write 
completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Write completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Write completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Write completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Write completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Write completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Write completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Write completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Write completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 Write completed with error (sct=0, sc=8) 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 Write completed with error (sct=0, sc=8) 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 Write completed with error (sct=0, sc=8) 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 Write completed with error (sct=0, sc=8) 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 Write completed with error (sct=0, sc=8) 
00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 Write completed with error (sct=0, sc=8) 00:16:34.820 Write completed with error (sct=0, sc=8) 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 Write completed with error (sct=0, sc=8) 00:16:34.820 Write completed with error (sct=0, sc=8) 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 Write completed with error (sct=0, sc=8) 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 Write completed with error (sct=0, sc=8) 00:16:34.820 Write completed with error (sct=0, sc=8) 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 Write completed with error (sct=0, sc=8) 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Write completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Write completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.820 Read completed with error (sct=0, sc=8) 00:16:34.820 starting I/O failed: -6 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 starting I/O failed: -6 00:16:34.821 Write completed with error (sct=0, sc=8) 00:16:34.821 starting I/O failed: -6 00:16:34.821 Write completed with error (sct=0, sc=8) 00:16:34.821 starting I/O failed: -6 00:16:34.821 Write completed with error (sct=0, sc=8) 00:16:34.821 starting I/O failed: -6 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 starting I/O failed: -6 00:16:34.821 Write completed with error (sct=0, sc=8) 00:16:34.821 starting I/O failed: -6 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 starting I/O failed: -6 00:16:34.821 Write completed with error (sct=0, sc=8) 00:16:34.821 starting I/O failed: -6 00:16:34.821 Write completed with error (sct=0, sc=8) 
00:16:34.821 starting I/O failed: -6 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 starting I/O failed: -6 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 starting I/O failed: -6 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 starting I/O failed: -6 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 starting I/O failed: -6 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 starting I/O failed: -6 00:16:34.821 Write completed with error (sct=0, sc=8) 00:16:34.821 starting I/O failed: -6 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 starting I/O failed: -6 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 starting I/O failed: -6 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 starting I/O failed: -6 00:16:34.821 Write completed with error (sct=0, sc=8) 00:16:34.821 starting I/O failed: -6 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 starting I/O failed: -6 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 starting I/O failed: -6 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 starting I/O failed: -6 00:16:34.821 Write completed with error (sct=0, sc=8) 00:16:34.821 starting I/O failed: -6 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 starting I/O failed: -6 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 starting I/O failed: -6 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 starting I/O failed: -6 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 starting I/O failed: -6 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 starting I/O failed: -6 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 starting I/O failed: -6 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 starting I/O failed: -6 00:16:34.821 Write completed with error (sct=0, sc=8) 00:16:34.821 starting I/O failed: -6 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 starting I/O failed: -6 00:16:34.821 Write completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Write completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Write completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Write completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Write completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Write completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Write completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Write completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 
00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Write completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Write completed with error (sct=0, sc=8) 00:16:34.821 Write completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Write completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Write completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Write completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Write completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Write completed with error (sct=0, sc=8) 00:16:34.821 Write completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Write completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Write completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Write completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Write completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Write completed with error (sct=0, sc=8) 00:16:34.821 Write completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Write completed with error (sct=0, sc=8) 00:16:34.821 Write completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Write completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error 
(sct=0, sc=8) 00:16:34.821 Write completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Write completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 Read completed with error (sct=0, sc=8) 00:16:34.821 11:10:55 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:16:34.821 11:10:55 -- target/delete_subsystem.sh@35 -- # kill -0 1597697 00:16:34.821 11:10:55 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:16:34.821 [2024-12-13 11:10:55.186601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:34.821 [2024-12-13 11:10:55.186636] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:34.821 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:16:34.821 Initializing NVMe Controllers 00:16:34.821 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:16:34.821 Controller IO queue size 128, less than required. 00:16:34.821 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:34.821 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:16:34.821 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:16:34.821 Initialization complete. Launching workers. 
00:16:34.821 ======================================================== 00:16:34.821 Latency(us) 00:16:34.821 Device Information : IOPS MiB/s Average min max 00:16:34.821 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.46 0.04 1593920.42 1000071.78 2977417.20 00:16:34.821 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.46 0.04 1595356.23 1000526.97 2978852.42 00:16:34.821 ======================================================== 00:16:34.821 Total : 160.93 0.08 1594638.33 1000071.78 2978852.42 00:16:34.821 00:16:35.390 11:10:55 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:16:35.390 11:10:55 -- target/delete_subsystem.sh@35 -- # kill -0 1597697 00:16:35.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1597697) - No such process 00:16:35.390 11:10:55 -- target/delete_subsystem.sh@45 -- # NOT wait 1597697 00:16:35.390 11:10:55 -- common/autotest_common.sh@650 -- # local es=0 00:16:35.390 11:10:55 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1597697 00:16:35.390 11:10:55 -- common/autotest_common.sh@638 -- # local arg=wait 00:16:35.390 11:10:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:35.390 11:10:55 -- common/autotest_common.sh@642 -- # type -t wait 00:16:35.390 11:10:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:35.390 11:10:55 -- common/autotest_common.sh@653 -- # wait 1597697 00:16:35.390 11:10:55 -- common/autotest_common.sh@653 -- # es=1 00:16:35.390 11:10:55 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:35.390 11:10:55 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:35.390 11:10:55 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:35.390 11:10:55 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:35.390 11:10:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.390 11:10:55 -- common/autotest_common.sh@10 -- # set +x 00:16:35.390 11:10:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.390 11:10:55 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:35.390 11:10:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.390 11:10:55 -- common/autotest_common.sh@10 -- # set +x 00:16:35.390 [2024-12-13 11:10:55.705114] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:35.390 11:10:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.390 11:10:55 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:35.390 11:10:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.390 11:10:55 -- common/autotest_common.sh@10 -- # set +x 00:16:35.390 11:10:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.390 11:10:55 -- target/delete_subsystem.sh@54 -- # perf_pid=1598543 00:16:35.390 11:10:55 -- target/delete_subsystem.sh@56 -- # delay=0 00:16:35.390 11:10:55 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:35.390 11:10:55 -- target/delete_subsystem.sh@57 -- # kill -0 1598543 00:16:35.390 11:10:55 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:35.390 
EAL: No free 2048 kB hugepages reported on node 1 00:16:35.390 [2024-12-13 11:10:55.785553] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:35.958 11:10:56 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:35.958 11:10:56 -- target/delete_subsystem.sh@57 -- # kill -0 1598543 00:16:35.958 11:10:56 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:36.217 11:10:56 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:36.217 11:10:56 -- target/delete_subsystem.sh@57 -- # kill -0 1598543 00:16:36.217 11:10:56 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:36.785 11:10:57 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:36.785 11:10:57 -- target/delete_subsystem.sh@57 -- # kill -0 1598543 00:16:36.785 11:10:57 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:37.352 11:10:57 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:37.352 11:10:57 -- target/delete_subsystem.sh@57 -- # kill -0 1598543 00:16:37.352 11:10:57 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:37.920 11:10:58 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:37.920 11:10:58 -- target/delete_subsystem.sh@57 -- # kill -0 1598543 00:16:37.920 11:10:58 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:38.178 11:10:58 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:38.178 11:10:58 -- target/delete_subsystem.sh@57 -- # kill -0 1598543 00:16:38.178 11:10:58 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:38.746 11:10:59 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:38.746 11:10:59 -- target/delete_subsystem.sh@57 -- # kill -0 1598543 00:16:38.746 11:10:59 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:39.314 11:10:59 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:39.314 11:10:59 -- target/delete_subsystem.sh@57 -- # kill -0 1598543 00:16:39.314 11:10:59 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:39.879 11:11:00 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:39.879 11:11:00 -- target/delete_subsystem.sh@57 -- # kill -0 1598543 00:16:39.879 11:11:00 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:40.447 11:11:00 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:40.447 11:11:00 -- target/delete_subsystem.sh@57 -- # kill -0 1598543 00:16:40.447 11:11:00 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:40.705 11:11:01 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:40.705 11:11:01 -- target/delete_subsystem.sh@57 -- # kill -0 1598543 00:16:40.705 11:11:01 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:41.273 11:11:01 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:41.273 11:11:01 -- target/delete_subsystem.sh@57 -- # kill -0 1598543 00:16:41.273 11:11:01 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:41.841 11:11:02 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:41.841 11:11:02 -- target/delete_subsystem.sh@57 -- # kill -0 1598543 00:16:41.841 11:11:02 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:42.409 11:11:02 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:42.409 11:11:02 -- target/delete_subsystem.sh@57 -- # kill -0 1598543 00:16:42.409 11:11:02 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:42.409 Initializing NVMe 
Controllers 00:16:42.409 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:16:42.409 Controller IO queue size 128, less than required. 00:16:42.409 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:42.409 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:16:42.409 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:16:42.409 Initialization complete. Launching workers. 00:16:42.409 ======================================================== 00:16:42.409 Latency(us) 00:16:42.409 Device Information : IOPS MiB/s Average min max 00:16:42.409 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001129.52 1000052.63 1003806.90 00:16:42.409 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002293.66 1000062.93 1005572.06 00:16:42.409 ======================================================== 00:16:42.409 Total : 256.00 0.12 1001711.59 1000052.63 1005572.06 00:16:42.409 00:16:42.977 11:11:03 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:42.977 11:11:03 -- target/delete_subsystem.sh@57 -- # kill -0 1598543 00:16:42.977 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1598543) - No such process 00:16:42.977 11:11:03 -- target/delete_subsystem.sh@67 -- # wait 1598543 00:16:42.977 11:11:03 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:16:42.977 11:11:03 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:16:42.977 11:11:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:42.977 11:11:03 -- nvmf/common.sh@116 -- # sync 00:16:42.977 11:11:03 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:16:42.977 11:11:03 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:16:42.977 11:11:03 -- nvmf/common.sh@119 -- # set +e 00:16:42.977 11:11:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:42.977 11:11:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:16:42.977 rmmod nvme_rdma 00:16:42.977 rmmod nvme_fabrics 00:16:42.977 11:11:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:42.977 11:11:03 -- nvmf/common.sh@123 -- # set -e 00:16:42.977 11:11:03 -- nvmf/common.sh@124 -- # return 0 00:16:42.977 11:11:03 -- nvmf/common.sh@477 -- # '[' -n 1597561 ']' 00:16:42.977 11:11:03 -- nvmf/common.sh@478 -- # killprocess 1597561 00:16:42.977 11:11:03 -- common/autotest_common.sh@936 -- # '[' -z 1597561 ']' 00:16:42.977 11:11:03 -- common/autotest_common.sh@940 -- # kill -0 1597561 00:16:42.977 11:11:03 -- common/autotest_common.sh@941 -- # uname 00:16:42.977 11:11:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:42.977 11:11:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1597561 00:16:42.977 11:11:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:42.977 11:11:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:42.977 11:11:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1597561' 00:16:42.977 killing process with pid 1597561 00:16:42.977 11:11:03 -- common/autotest_common.sh@955 -- # kill 1597561 00:16:42.977 11:11:03 -- common/autotest_common.sh@960 -- # wait 1597561 00:16:43.237 11:11:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:43.237 11:11:03 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:16:43.237 00:16:43.237 real 0m18.990s 
00:16:43.237 user 0m49.561s 00:16:43.237 sys 0m5.052s 00:16:43.237 11:11:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:43.237 11:11:03 -- common/autotest_common.sh@10 -- # set +x 00:16:43.237 ************************************ 00:16:43.237 END TEST nvmf_delete_subsystem 00:16:43.237 ************************************ 00:16:43.237 11:11:03 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:16:43.237 11:11:03 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:16:43.237 11:11:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:43.237 11:11:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:43.237 11:11:03 -- common/autotest_common.sh@10 -- # set +x 00:16:43.237 ************************************ 00:16:43.237 START TEST nvmf_nvme_cli 00:16:43.237 ************************************ 00:16:43.237 11:11:03 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:16:43.237 * Looking for test storage... 00:16:43.237 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:43.237 11:11:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:43.237 11:11:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:43.237 11:11:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:43.496 11:11:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:43.496 11:11:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:43.496 11:11:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:43.496 11:11:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:43.496 11:11:03 -- scripts/common.sh@335 -- # IFS=.-: 00:16:43.496 11:11:03 -- scripts/common.sh@335 -- # read -ra ver1 00:16:43.496 11:11:03 -- scripts/common.sh@336 -- # IFS=.-: 00:16:43.496 11:11:03 -- scripts/common.sh@336 -- # read -ra ver2 00:16:43.496 11:11:03 -- scripts/common.sh@337 -- # local 'op=<' 00:16:43.496 11:11:03 -- scripts/common.sh@339 -- # ver1_l=2 00:16:43.496 11:11:03 -- scripts/common.sh@340 -- # ver2_l=1 00:16:43.496 11:11:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:43.496 11:11:03 -- scripts/common.sh@343 -- # case "$op" in 00:16:43.496 11:11:03 -- scripts/common.sh@344 -- # : 1 00:16:43.496 11:11:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:43.496 11:11:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:43.496 11:11:03 -- scripts/common.sh@364 -- # decimal 1 00:16:43.496 11:11:03 -- scripts/common.sh@352 -- # local d=1 00:16:43.496 11:11:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:43.496 11:11:03 -- scripts/common.sh@354 -- # echo 1 00:16:43.496 11:11:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:43.496 11:11:03 -- scripts/common.sh@365 -- # decimal 2 00:16:43.496 11:11:03 -- scripts/common.sh@352 -- # local d=2 00:16:43.496 11:11:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:43.496 11:11:03 -- scripts/common.sh@354 -- # echo 2 00:16:43.496 11:11:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:43.496 11:11:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:43.496 11:11:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:43.496 11:11:03 -- scripts/common.sh@367 -- # return 0 00:16:43.496 11:11:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:43.496 11:11:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:43.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.496 --rc genhtml_branch_coverage=1 00:16:43.496 --rc genhtml_function_coverage=1 00:16:43.496 --rc genhtml_legend=1 00:16:43.496 --rc geninfo_all_blocks=1 00:16:43.496 --rc geninfo_unexecuted_blocks=1 00:16:43.496 00:16:43.496 ' 00:16:43.496 11:11:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:43.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.496 --rc genhtml_branch_coverage=1 00:16:43.496 --rc genhtml_function_coverage=1 00:16:43.496 --rc genhtml_legend=1 00:16:43.496 --rc geninfo_all_blocks=1 00:16:43.496 --rc geninfo_unexecuted_blocks=1 00:16:43.496 00:16:43.496 ' 00:16:43.496 11:11:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:43.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.496 --rc genhtml_branch_coverage=1 00:16:43.496 --rc genhtml_function_coverage=1 00:16:43.496 --rc genhtml_legend=1 00:16:43.497 --rc geninfo_all_blocks=1 00:16:43.497 --rc geninfo_unexecuted_blocks=1 00:16:43.497 00:16:43.497 ' 00:16:43.497 11:11:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:43.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.497 --rc genhtml_branch_coverage=1 00:16:43.497 --rc genhtml_function_coverage=1 00:16:43.497 --rc genhtml_legend=1 00:16:43.497 --rc geninfo_all_blocks=1 00:16:43.497 --rc geninfo_unexecuted_blocks=1 00:16:43.497 00:16:43.497 ' 00:16:43.497 11:11:03 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:43.497 11:11:03 -- nvmf/common.sh@7 -- # uname -s 00:16:43.497 11:11:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:43.497 11:11:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:43.497 11:11:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:43.497 11:11:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:43.497 11:11:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:43.497 11:11:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:43.497 11:11:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:43.497 11:11:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:43.497 11:11:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:43.497 11:11:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:43.497 11:11:03 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:16:43.497 11:11:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:16:43.497 11:11:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:43.497 11:11:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:43.497 11:11:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:43.497 11:11:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:43.497 11:11:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:43.497 11:11:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:43.497 11:11:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:43.497 11:11:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.497 11:11:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.497 11:11:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.497 11:11:03 -- paths/export.sh@5 -- # export PATH 00:16:43.497 11:11:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.497 11:11:03 -- nvmf/common.sh@46 -- # : 0 00:16:43.497 11:11:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:43.497 11:11:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:43.497 11:11:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:43.497 11:11:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:43.497 11:11:03 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:43.497 11:11:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:43.497 11:11:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:43.497 11:11:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:43.497 11:11:03 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:43.497 11:11:03 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:43.497 11:11:03 -- target/nvme_cli.sh@14 -- # devs=() 00:16:43.497 11:11:03 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:16:43.497 11:11:03 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:16:43.497 11:11:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:43.497 11:11:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:43.497 11:11:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:43.497 11:11:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:43.497 11:11:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.497 11:11:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:43.497 11:11:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.497 11:11:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:43.497 11:11:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:43.497 11:11:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:43.497 11:11:03 -- common/autotest_common.sh@10 -- # set +x 00:16:48.768 11:11:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:48.768 11:11:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:48.768 11:11:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:48.768 11:11:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:48.768 11:11:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:48.768 11:11:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:48.768 11:11:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:48.768 11:11:08 -- nvmf/common.sh@294 -- # net_devs=() 00:16:48.768 11:11:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:48.768 11:11:08 -- nvmf/common.sh@295 -- # e810=() 00:16:48.768 11:11:08 -- nvmf/common.sh@295 -- # local -ga e810 00:16:48.768 11:11:08 -- nvmf/common.sh@296 -- # x722=() 00:16:48.768 11:11:08 -- nvmf/common.sh@296 -- # local -ga x722 00:16:48.768 11:11:08 -- nvmf/common.sh@297 -- # mlx=() 00:16:48.768 11:11:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:48.768 11:11:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:48.768 11:11:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:48.768 11:11:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:48.768 11:11:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:48.768 11:11:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:48.768 11:11:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:48.768 11:11:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:48.768 11:11:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:48.768 11:11:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:48.768 11:11:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:48.768 11:11:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:48.768 11:11:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:48.768 11:11:08 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:16:48.768 11:11:08 
-- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:16:48.768 11:11:08 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:16:48.768 11:11:08 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:16:48.768 11:11:08 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:16:48.768 11:11:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:48.768 11:11:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:48.768 11:11:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:16:48.768 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:16:48.768 11:11:08 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:48.768 11:11:08 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:48.768 11:11:08 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:48.768 11:11:08 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:48.768 11:11:08 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:48.768 11:11:08 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:48.768 11:11:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:48.768 11:11:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:16:48.768 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:16:48.768 11:11:08 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:48.768 11:11:08 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:48.768 11:11:08 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:48.768 11:11:08 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:48.768 11:11:08 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:48.768 11:11:08 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:48.768 11:11:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:48.768 11:11:08 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:16:48.768 11:11:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:48.768 11:11:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.768 11:11:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:48.768 11:11:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.768 11:11:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:16:48.768 Found net devices under 0000:18:00.0: mlx_0_0 00:16:48.768 11:11:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.768 11:11:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:48.768 11:11:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.768 11:11:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:48.768 11:11:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.768 11:11:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:16:48.768 Found net devices under 0000:18:00.1: mlx_0_1 00:16:48.768 11:11:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.768 11:11:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:48.768 11:11:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:48.768 11:11:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:48.768 11:11:08 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:16:48.768 11:11:08 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:16:48.768 11:11:08 -- nvmf/common.sh@408 -- # rdma_device_init 00:16:48.768 11:11:08 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:16:48.768 11:11:08 -- nvmf/common.sh@57 -- # uname 00:16:48.768 11:11:08 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:16:48.768 
11:11:08 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:16:48.768 11:11:08 -- nvmf/common.sh@62 -- # modprobe ib_core 00:16:48.768 11:11:08 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:16:48.768 11:11:08 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:16:48.768 11:11:08 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:16:48.768 11:11:08 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:16:48.768 11:11:08 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:16:48.768 11:11:08 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:16:48.769 11:11:08 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:48.769 11:11:08 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:16:48.769 11:11:08 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:48.769 11:11:08 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:48.769 11:11:08 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:48.769 11:11:08 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:48.769 11:11:08 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:48.769 11:11:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:48.769 11:11:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:48.769 11:11:08 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:48.769 11:11:08 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:48.769 11:11:08 -- nvmf/common.sh@104 -- # continue 2 00:16:48.769 11:11:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:48.769 11:11:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:48.769 11:11:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:48.769 11:11:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:48.769 11:11:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:48.769 11:11:08 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:48.769 11:11:08 -- nvmf/common.sh@104 -- # continue 2 00:16:48.769 11:11:08 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:48.769 11:11:08 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:16:48.769 11:11:08 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:48.769 11:11:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:48.769 11:11:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:48.769 11:11:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:48.769 11:11:08 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:16:48.769 11:11:08 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:16:48.769 11:11:08 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:16:48.769 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:48.769 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:16:48.769 altname enp24s0f0np0 00:16:48.769 altname ens785f0np0 00:16:48.769 inet 192.168.100.8/24 scope global mlx_0_0 00:16:48.769 valid_lft forever preferred_lft forever 00:16:48.769 11:11:08 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:48.769 11:11:08 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:16:48.769 11:11:08 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:48.769 11:11:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:48.769 11:11:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:48.769 11:11:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:48.769 11:11:08 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:16:48.769 11:11:08 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:16:48.769 11:11:08 -- nvmf/common.sh@80 -- # ip addr show 
mlx_0_1 00:16:48.769 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:48.769 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:16:48.769 altname enp24s0f1np1 00:16:48.769 altname ens785f1np1 00:16:48.769 inet 192.168.100.9/24 scope global mlx_0_1 00:16:48.769 valid_lft forever preferred_lft forever 00:16:48.769 11:11:08 -- nvmf/common.sh@410 -- # return 0 00:16:48.769 11:11:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:48.769 11:11:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:48.769 11:11:08 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:16:48.769 11:11:08 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:16:48.769 11:11:08 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:16:48.769 11:11:08 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:48.769 11:11:08 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:48.769 11:11:08 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:48.769 11:11:08 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:48.769 11:11:08 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:48.769 11:11:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:48.769 11:11:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:48.769 11:11:08 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:48.769 11:11:08 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:48.769 11:11:08 -- nvmf/common.sh@104 -- # continue 2 00:16:48.769 11:11:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:48.769 11:11:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:48.769 11:11:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:48.769 11:11:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:48.769 11:11:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:48.769 11:11:08 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:48.769 11:11:08 -- nvmf/common.sh@104 -- # continue 2 00:16:48.769 11:11:08 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:48.769 11:11:08 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:16:48.769 11:11:08 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:48.769 11:11:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:48.769 11:11:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:48.769 11:11:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:48.769 11:11:08 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:48.769 11:11:08 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:16:48.769 11:11:08 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:48.769 11:11:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:48.769 11:11:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:48.769 11:11:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:48.769 11:11:08 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:16:48.769 192.168.100.9' 00:16:48.769 11:11:08 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:16:48.769 192.168.100.9' 00:16:48.769 11:11:08 -- nvmf/common.sh@445 -- # head -n 1 00:16:48.769 11:11:08 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:48.769 11:11:08 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:48.769 192.168.100.9' 00:16:48.769 11:11:08 -- nvmf/common.sh@446 -- # head -n 1 00:16:48.769 11:11:08 -- nvmf/common.sh@446 -- # tail -n +2 00:16:48.769 11:11:08 -- nvmf/common.sh@446 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:48.769 11:11:08 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:16:48.769 11:11:08 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:48.769 11:11:08 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:16:48.769 11:11:08 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:16:48.769 11:11:08 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:16:48.769 11:11:08 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:48.769 11:11:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:48.769 11:11:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:48.769 11:11:08 -- common/autotest_common.sh@10 -- # set +x 00:16:48.769 11:11:08 -- nvmf/common.sh@469 -- # nvmfpid=1603128 00:16:48.769 11:11:08 -- nvmf/common.sh@470 -- # waitforlisten 1603128 00:16:48.769 11:11:08 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:48.769 11:11:08 -- common/autotest_common.sh@829 -- # '[' -z 1603128 ']' 00:16:48.769 11:11:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.769 11:11:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:48.769 11:11:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.769 11:11:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:48.769 11:11:08 -- common/autotest_common.sh@10 -- # set +x 00:16:48.769 [2024-12-13 11:11:08.925734] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:48.769 [2024-12-13 11:11:08.925786] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.769 EAL: No free 2048 kB hugepages reported on node 1 00:16:48.769 [2024-12-13 11:11:08.982529] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:48.769 [2024-12-13 11:11:09.053987] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:48.769 [2024-12-13 11:11:09.054112] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:48.769 [2024-12-13 11:11:09.054119] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:48.769 [2024-12-13 11:11:09.054124] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
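A minimal sketch of the target launch the log records just above for the nvme_cli test, assuming the workspace paths shown there; the harness's waitforlisten helper does the real polling of /var/tmp/spdk.sock, and the rpc_get_methods probe below is only an illustrative stand-in for it:

    # start nvmf_tgt with the same flags the log shows (-i 0 -e 0xFFFF -m 0xF)
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # wait until the app answers on its default RPC socket
    until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

The reactor messages that follow confirm the four cores selected by the 0xF mask came up.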
00:16:48.769 [2024-12-13 11:11:09.054168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.769 [2024-12-13 11:11:09.054304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:48.769 [2024-12-13 11:11:09.054343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:48.769 [2024-12-13 11:11:09.054345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.337 11:11:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:49.337 11:11:09 -- common/autotest_common.sh@862 -- # return 0 00:16:49.337 11:11:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:49.337 11:11:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:49.337 11:11:09 -- common/autotest_common.sh@10 -- # set +x 00:16:49.337 11:11:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.337 11:11:09 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:49.337 11:11:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.337 11:11:09 -- common/autotest_common.sh@10 -- # set +x 00:16:49.337 [2024-12-13 11:11:09.776107] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd1e960/0xd22e50) succeed. 00:16:49.337 [2024-12-13 11:11:09.784316] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd1ff50/0xd644f0) succeed. 00:16:49.337 11:11:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.337 11:11:09 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:49.337 11:11:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.337 11:11:09 -- common/autotest_common.sh@10 -- # set +x 00:16:49.597 Malloc0 00:16:49.597 11:11:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.597 11:11:09 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:49.597 11:11:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.597 11:11:09 -- common/autotest_common.sh@10 -- # set +x 00:16:49.597 Malloc1 00:16:49.597 11:11:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.597 11:11:09 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:49.597 11:11:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.597 11:11:09 -- common/autotest_common.sh@10 -- # set +x 00:16:49.597 11:11:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.597 11:11:09 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:49.597 11:11:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.597 11:11:09 -- common/autotest_common.sh@10 -- # set +x 00:16:49.597 11:11:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.597 11:11:09 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:49.597 11:11:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.597 11:11:09 -- common/autotest_common.sh@10 -- # set +x 00:16:49.597 11:11:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.597 11:11:09 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:49.597 11:11:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.597 11:11:09 -- common/autotest_common.sh@10 -- # set +x 00:16:49.597 [2024-12-13 11:11:09.965252] 
rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:49.597 11:11:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.597 11:11:09 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:16:49.597 11:11:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.597 11:11:09 -- common/autotest_common.sh@10 -- # set +x 00:16:49.597 11:11:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.597 11:11:09 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:16:49.597 00:16:49.597 Discovery Log Number of Records 2, Generation counter 2 00:16:49.597 =====Discovery Log Entry 0====== 00:16:49.597 trtype: rdma 00:16:49.597 adrfam: ipv4 00:16:49.597 subtype: current discovery subsystem 00:16:49.597 treq: not required 00:16:49.597 portid: 0 00:16:49.597 trsvcid: 4420 00:16:49.597 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:49.597 traddr: 192.168.100.8 00:16:49.597 eflags: explicit discovery connections, duplicate discovery information 00:16:49.597 rdma_prtype: not specified 00:16:49.597 rdma_qptype: connected 00:16:49.597 rdma_cms: rdma-cm 00:16:49.597 rdma_pkey: 0x0000 00:16:49.597 =====Discovery Log Entry 1====== 00:16:49.597 trtype: rdma 00:16:49.597 adrfam: ipv4 00:16:49.597 subtype: nvme subsystem 00:16:49.597 treq: not required 00:16:49.597 portid: 0 00:16:49.597 trsvcid: 4420 00:16:49.597 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:49.597 traddr: 192.168.100.8 00:16:49.597 eflags: none 00:16:49.597 rdma_prtype: not specified 00:16:49.597 rdma_qptype: connected 00:16:49.597 rdma_cms: rdma-cm 00:16:49.597 rdma_pkey: 0x0000 00:16:49.597 11:11:10 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:49.597 11:11:10 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:49.597 11:11:10 -- nvmf/common.sh@510 -- # local dev _ 00:16:49.597 11:11:10 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:49.597 11:11:10 -- nvmf/common.sh@509 -- # nvme list 00:16:49.597 11:11:10 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:16:49.597 11:11:10 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:49.597 11:11:10 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:16:49.597 11:11:10 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:49.597 11:11:10 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:49.597 11:11:10 -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:16:50.534 11:11:11 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:50.534 11:11:11 -- common/autotest_common.sh@1187 -- # local i=0 00:16:50.534 11:11:11 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:16:50.534 11:11:11 -- common/autotest_common.sh@1189 -- # [[ -n 2 ]] 00:16:50.534 11:11:11 -- common/autotest_common.sh@1190 -- # nvme_device_counter=2 00:16:50.534 11:11:11 -- common/autotest_common.sh@1194 -- # sleep 2 00:16:53.069 11:11:13 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:16:53.069 11:11:13 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:16:53.069 11:11:13 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:16:53.069 11:11:13 
-- common/autotest_common.sh@1196 -- # nvme_devices=2 00:16:53.069 11:11:13 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:16:53.069 11:11:13 -- common/autotest_common.sh@1197 -- # return 0 00:16:53.069 11:11:13 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:53.069 11:11:13 -- nvmf/common.sh@510 -- # local dev _ 00:16:53.069 11:11:13 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:53.069 11:11:13 -- nvmf/common.sh@509 -- # nvme list 00:16:53.069 11:11:13 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:16:53.069 11:11:13 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:53.069 11:11:13 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:16:53.069 11:11:13 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:53.069 11:11:13 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:53.069 11:11:13 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:16:53.069 11:11:13 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:53.069 11:11:13 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:53.069 11:11:13 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:16:53.069 11:11:13 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:53.069 11:11:13 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:16:53.069 /dev/nvme0n2 ]] 00:16:53.069 11:11:13 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:53.069 11:11:13 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:53.069 11:11:13 -- nvmf/common.sh@510 -- # local dev _ 00:16:53.069 11:11:13 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:53.069 11:11:13 -- nvmf/common.sh@509 -- # nvme list 00:16:53.069 11:11:13 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:16:53.069 11:11:13 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:53.069 11:11:13 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:16:53.069 11:11:13 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:53.069 11:11:13 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:53.069 11:11:13 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:16:53.069 11:11:13 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:53.069 11:11:13 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:53.069 11:11:13 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:16:53.069 11:11:13 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:53.069 11:11:13 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:53.069 11:11:13 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:53.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:53.636 11:11:14 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:53.636 11:11:14 -- common/autotest_common.sh@1208 -- # local i=0 00:16:53.636 11:11:14 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:16:53.636 11:11:14 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:53.636 11:11:14 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:16:53.636 11:11:14 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:53.636 11:11:14 -- common/autotest_common.sh@1220 -- # return 0 00:16:53.636 11:11:14 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:53.636 11:11:14 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:53.636 11:11:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.636 11:11:14 -- common/autotest_common.sh@10 -- # set +x 00:16:53.636 11:11:14 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.636 11:11:14 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:53.636 11:11:14 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:53.636 11:11:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:53.636 11:11:14 -- nvmf/common.sh@116 -- # sync 00:16:53.636 11:11:14 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:16:53.636 11:11:14 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:16:53.636 11:11:14 -- nvmf/common.sh@119 -- # set +e 00:16:53.636 11:11:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:53.636 11:11:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:16:53.636 rmmod nvme_rdma 00:16:53.636 rmmod nvme_fabrics 00:16:53.636 11:11:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:53.636 11:11:14 -- nvmf/common.sh@123 -- # set -e 00:16:53.636 11:11:14 -- nvmf/common.sh@124 -- # return 0 00:16:53.636 11:11:14 -- nvmf/common.sh@477 -- # '[' -n 1603128 ']' 00:16:53.636 11:11:14 -- nvmf/common.sh@478 -- # killprocess 1603128 00:16:53.636 11:11:14 -- common/autotest_common.sh@936 -- # '[' -z 1603128 ']' 00:16:53.636 11:11:14 -- common/autotest_common.sh@940 -- # kill -0 1603128 00:16:53.636 11:11:14 -- common/autotest_common.sh@941 -- # uname 00:16:53.636 11:11:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:53.636 11:11:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1603128 00:16:53.896 11:11:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:53.896 11:11:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:53.896 11:11:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1603128' 00:16:53.896 killing process with pid 1603128 00:16:53.896 11:11:14 -- common/autotest_common.sh@955 -- # kill 1603128 00:16:53.896 11:11:14 -- common/autotest_common.sh@960 -- # wait 1603128 00:16:54.155 11:11:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:54.155 11:11:14 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:16:54.155 00:16:54.155 real 0m10.871s 00:16:54.155 user 0m23.020s 00:16:54.155 sys 0m4.409s 00:16:54.155 11:11:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:54.155 11:11:14 -- common/autotest_common.sh@10 -- # set +x 00:16:54.155 ************************************ 00:16:54.155 END TEST nvmf_nvme_cli 00:16:54.155 ************************************ 00:16:54.155 11:11:14 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:16:54.155 11:11:14 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:16:54.155 11:11:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:54.155 11:11:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:54.155 11:11:14 -- common/autotest_common.sh@10 -- # set +x 00:16:54.155 ************************************ 00:16:54.155 START TEST nvmf_host_management 00:16:54.155 ************************************ 00:16:54.155 11:11:14 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:16:54.155 * Looking for test storage... 
00:16:54.155 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:54.155 11:11:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:54.155 11:11:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:54.155 11:11:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:54.415 11:11:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:54.415 11:11:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:54.415 11:11:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:54.415 11:11:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:54.415 11:11:14 -- scripts/common.sh@335 -- # IFS=.-: 00:16:54.415 11:11:14 -- scripts/common.sh@335 -- # read -ra ver1 00:16:54.415 11:11:14 -- scripts/common.sh@336 -- # IFS=.-: 00:16:54.415 11:11:14 -- scripts/common.sh@336 -- # read -ra ver2 00:16:54.415 11:11:14 -- scripts/common.sh@337 -- # local 'op=<' 00:16:54.415 11:11:14 -- scripts/common.sh@339 -- # ver1_l=2 00:16:54.415 11:11:14 -- scripts/common.sh@340 -- # ver2_l=1 00:16:54.415 11:11:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:54.415 11:11:14 -- scripts/common.sh@343 -- # case "$op" in 00:16:54.415 11:11:14 -- scripts/common.sh@344 -- # : 1 00:16:54.415 11:11:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:54.415 11:11:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:54.415 11:11:14 -- scripts/common.sh@364 -- # decimal 1 00:16:54.415 11:11:14 -- scripts/common.sh@352 -- # local d=1 00:16:54.415 11:11:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:54.415 11:11:14 -- scripts/common.sh@354 -- # echo 1 00:16:54.415 11:11:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:54.415 11:11:14 -- scripts/common.sh@365 -- # decimal 2 00:16:54.415 11:11:14 -- scripts/common.sh@352 -- # local d=2 00:16:54.415 11:11:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:54.415 11:11:14 -- scripts/common.sh@354 -- # echo 2 00:16:54.415 11:11:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:54.415 11:11:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:54.415 11:11:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:54.415 11:11:14 -- scripts/common.sh@367 -- # return 0 00:16:54.415 11:11:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:54.415 11:11:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:54.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.415 --rc genhtml_branch_coverage=1 00:16:54.415 --rc genhtml_function_coverage=1 00:16:54.415 --rc genhtml_legend=1 00:16:54.416 --rc geninfo_all_blocks=1 00:16:54.416 --rc geninfo_unexecuted_blocks=1 00:16:54.416 00:16:54.416 ' 00:16:54.416 11:11:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:54.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.416 --rc genhtml_branch_coverage=1 00:16:54.416 --rc genhtml_function_coverage=1 00:16:54.416 --rc genhtml_legend=1 00:16:54.416 --rc geninfo_all_blocks=1 00:16:54.416 --rc geninfo_unexecuted_blocks=1 00:16:54.416 00:16:54.416 ' 00:16:54.416 11:11:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:54.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.416 --rc genhtml_branch_coverage=1 00:16:54.416 --rc genhtml_function_coverage=1 00:16:54.416 --rc genhtml_legend=1 00:16:54.416 --rc geninfo_all_blocks=1 00:16:54.416 --rc geninfo_unexecuted_blocks=1 00:16:54.416 00:16:54.416 ' 
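The test that just ended above (nvmf_nvme_cli) drove the host side with stock nvme-cli against the 192.168.100.8:4420 RDMA listener. A condensed sketch of that host-side sequence, reusing the hostnqn/hostid values from the log:

    # discover the target's subsystems over RDMA
    nvme discover -t rdma -a 192.168.100.8 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 \
        --hostid=00bafac1-9c9c-e711-906e-0017a4403562
    # connect; the extra -i 15 is the NVME_CONNECT override the harness applies for these mlx5 NICs
    nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 \
        --hostid=00bafac1-9c9c-e711-906e-0017a4403562
    # waitforserial: both Malloc namespaces should appear with the SPDK serial (expect 2)
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME
    # tear the host side back down
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1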
00:16:54.416 11:11:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:54.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.416 --rc genhtml_branch_coverage=1 00:16:54.416 --rc genhtml_function_coverage=1 00:16:54.416 --rc genhtml_legend=1 00:16:54.416 --rc geninfo_all_blocks=1 00:16:54.416 --rc geninfo_unexecuted_blocks=1 00:16:54.416 00:16:54.416 ' 00:16:54.416 11:11:14 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:54.416 11:11:14 -- nvmf/common.sh@7 -- # uname -s 00:16:54.416 11:11:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:54.416 11:11:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:54.416 11:11:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:54.416 11:11:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:54.416 11:11:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:54.416 11:11:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:54.416 11:11:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:54.416 11:11:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:54.416 11:11:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:54.416 11:11:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:54.416 11:11:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:16:54.416 11:11:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:16:54.416 11:11:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:54.416 11:11:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:54.416 11:11:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:54.416 11:11:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:54.416 11:11:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:54.416 11:11:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:54.416 11:11:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:54.416 11:11:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.416 11:11:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.416 11:11:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.416 11:11:14 -- paths/export.sh@5 -- # export PATH 00:16:54.416 11:11:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.416 11:11:14 -- nvmf/common.sh@46 -- # : 0 00:16:54.416 11:11:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:54.416 11:11:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:54.416 11:11:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:54.416 11:11:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:54.416 11:11:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:54.416 11:11:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:54.416 11:11:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:54.416 11:11:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:54.416 11:11:14 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:54.416 11:11:14 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:54.416 11:11:14 -- target/host_management.sh@104 -- # nvmftestinit 00:16:54.416 11:11:14 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:16:54.416 11:11:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:54.416 11:11:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:54.416 11:11:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:54.416 11:11:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:54.416 11:11:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.416 11:11:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:54.416 11:11:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.416 11:11:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:54.416 11:11:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:54.416 11:11:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:54.416 11:11:14 -- common/autotest_common.sh@10 -- # set +x 00:16:59.769 11:11:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:59.769 11:11:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:59.769 11:11:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:59.769 11:11:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:59.769 11:11:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:59.769 11:11:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:59.769 11:11:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:59.769 11:11:20 -- nvmf/common.sh@294 -- # net_devs=() 00:16:59.769 11:11:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:59.769 
11:11:20 -- nvmf/common.sh@295 -- # e810=() 00:16:59.769 11:11:20 -- nvmf/common.sh@295 -- # local -ga e810 00:16:59.769 11:11:20 -- nvmf/common.sh@296 -- # x722=() 00:16:59.769 11:11:20 -- nvmf/common.sh@296 -- # local -ga x722 00:16:59.769 11:11:20 -- nvmf/common.sh@297 -- # mlx=() 00:16:59.769 11:11:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:59.769 11:11:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:59.769 11:11:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:59.769 11:11:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:59.769 11:11:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:59.769 11:11:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:59.769 11:11:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:59.769 11:11:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:59.769 11:11:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:59.769 11:11:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:59.769 11:11:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:59.769 11:11:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:59.769 11:11:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:59.769 11:11:20 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:16:59.769 11:11:20 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:16:59.769 11:11:20 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:16:59.769 11:11:20 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:16:59.769 11:11:20 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:16:59.769 11:11:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:59.769 11:11:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:59.769 11:11:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:16:59.769 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:16:59.769 11:11:20 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:59.769 11:11:20 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:59.769 11:11:20 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:59.769 11:11:20 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:59.769 11:11:20 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:59.769 11:11:20 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:59.769 11:11:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:59.769 11:11:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:16:59.769 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:16:59.769 11:11:20 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:59.769 11:11:20 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:59.769 11:11:20 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:59.769 11:11:20 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:59.769 11:11:20 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:59.769 11:11:20 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:59.769 11:11:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:59.769 11:11:20 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:16:59.769 11:11:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:59.769 11:11:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:59.769 11:11:20 -- 
nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:59.769 11:11:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:59.769 11:11:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:16:59.769 Found net devices under 0000:18:00.0: mlx_0_0 00:16:59.769 11:11:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:59.769 11:11:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:59.769 11:11:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:59.769 11:11:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:59.769 11:11:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:59.769 11:11:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:16:59.769 Found net devices under 0000:18:00.1: mlx_0_1 00:16:59.769 11:11:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:59.769 11:11:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:59.769 11:11:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:59.769 11:11:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:59.769 11:11:20 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:16:59.769 11:11:20 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:16:59.769 11:11:20 -- nvmf/common.sh@408 -- # rdma_device_init 00:16:59.770 11:11:20 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:16:59.770 11:11:20 -- nvmf/common.sh@57 -- # uname 00:16:59.770 11:11:20 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:16:59.770 11:11:20 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:16:59.770 11:11:20 -- nvmf/common.sh@62 -- # modprobe ib_core 00:16:59.770 11:11:20 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:16:59.770 11:11:20 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:16:59.770 11:11:20 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:16:59.770 11:11:20 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:16:59.770 11:11:20 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:16:59.770 11:11:20 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:16:59.770 11:11:20 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:59.770 11:11:20 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:16:59.770 11:11:20 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:59.770 11:11:20 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:59.770 11:11:20 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:59.770 11:11:20 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:00.123 11:11:20 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:00.123 11:11:20 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:00.123 11:11:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:00.123 11:11:20 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:00.123 11:11:20 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:00.123 11:11:20 -- nvmf/common.sh@104 -- # continue 2 00:17:00.123 11:11:20 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:00.123 11:11:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:00.123 11:11:20 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:00.123 11:11:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:00.123 11:11:20 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:00.123 11:11:20 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:00.123 11:11:20 -- nvmf/common.sh@104 -- # continue 2 00:17:00.123 11:11:20 -- 
nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:00.123 11:11:20 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:00.123 11:11:20 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:00.123 11:11:20 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:00.123 11:11:20 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:00.123 11:11:20 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:00.124 11:11:20 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:00.124 11:11:20 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:00.124 11:11:20 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:00.124 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:00.124 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:17:00.124 altname enp24s0f0np0 00:17:00.124 altname ens785f0np0 00:17:00.124 inet 192.168.100.8/24 scope global mlx_0_0 00:17:00.124 valid_lft forever preferred_lft forever 00:17:00.124 11:11:20 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:00.124 11:11:20 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:00.124 11:11:20 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:00.124 11:11:20 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:00.124 11:11:20 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:00.124 11:11:20 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:00.124 11:11:20 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:00.124 11:11:20 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:00.124 11:11:20 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:17:00.124 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:00.124 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:17:00.124 altname enp24s0f1np1 00:17:00.124 altname ens785f1np1 00:17:00.124 inet 192.168.100.9/24 scope global mlx_0_1 00:17:00.124 valid_lft forever preferred_lft forever 00:17:00.124 11:11:20 -- nvmf/common.sh@410 -- # return 0 00:17:00.124 11:11:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:00.124 11:11:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:00.124 11:11:20 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:00.124 11:11:20 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:00.124 11:11:20 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:00.124 11:11:20 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:00.124 11:11:20 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:00.124 11:11:20 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:00.124 11:11:20 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:00.124 11:11:20 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:00.124 11:11:20 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:00.124 11:11:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:00.124 11:11:20 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:00.124 11:11:20 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:00.124 11:11:20 -- nvmf/common.sh@104 -- # continue 2 00:17:00.124 11:11:20 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:00.124 11:11:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:00.124 11:11:20 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:00.124 11:11:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:00.124 11:11:20 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:00.124 11:11:20 -- nvmf/common.sh@103 -- # echo mlx_0_1 
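Before host_management starts its own target, the harness repeats the RDMA environment setup seen above: load the IB/RDMA kernel modules and read back the first IPv4 address on each mlx5 port. A condensed sketch of those two steps, assuming the mlx_0_0/mlx_0_1 interface names on this rig:

    # modules rdma_device_init loads in the log
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$m"
    done
    # first IPv4 address on each RDMA-capable port (192.168.100.8 and 192.168.100.9 here)
    for ifc in mlx_0_0 mlx_0_1; do
        ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
    done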
00:17:00.124 11:11:20 -- nvmf/common.sh@104 -- # continue 2 00:17:00.124 11:11:20 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:00.124 11:11:20 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:00.124 11:11:20 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:00.124 11:11:20 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:00.124 11:11:20 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:00.124 11:11:20 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:00.124 11:11:20 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:00.124 11:11:20 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:00.124 11:11:20 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:00.124 11:11:20 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:00.124 11:11:20 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:00.124 11:11:20 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:00.124 11:11:20 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:00.124 192.168.100.9' 00:17:00.124 11:11:20 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:00.124 192.168.100.9' 00:17:00.124 11:11:20 -- nvmf/common.sh@445 -- # head -n 1 00:17:00.124 11:11:20 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:00.124 11:11:20 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:00.124 192.168.100.9' 00:17:00.124 11:11:20 -- nvmf/common.sh@446 -- # tail -n +2 00:17:00.124 11:11:20 -- nvmf/common.sh@446 -- # head -n 1 00:17:00.124 11:11:20 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:00.124 11:11:20 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:00.124 11:11:20 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:00.124 11:11:20 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:00.124 11:11:20 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:00.124 11:11:20 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:00.124 11:11:20 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:17:00.124 11:11:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:00.124 11:11:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:00.124 11:11:20 -- common/autotest_common.sh@10 -- # set +x 00:17:00.124 ************************************ 00:17:00.124 START TEST nvmf_host_management 00:17:00.124 ************************************ 00:17:00.124 11:11:20 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:17:00.124 11:11:20 -- target/host_management.sh@69 -- # starttarget 00:17:00.124 11:11:20 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:17:00.124 11:11:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:00.124 11:11:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:00.124 11:11:20 -- common/autotest_common.sh@10 -- # set +x 00:17:00.124 11:11:20 -- nvmf/common.sh@469 -- # nvmfpid=1607512 00:17:00.124 11:11:20 -- nvmf/common.sh@470 -- # waitforlisten 1607512 00:17:00.124 11:11:20 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:00.124 11:11:20 -- common/autotest_common.sh@829 -- # '[' -z 1607512 ']' 00:17:00.124 11:11:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.124 11:11:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:00.124 11:11:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:00.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.124 11:11:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:00.124 11:11:20 -- common/autotest_common.sh@10 -- # set +x 00:17:00.124 [2024-12-13 11:11:20.501986] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:00.124 [2024-12-13 11:11:20.502036] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:00.124 EAL: No free 2048 kB hugepages reported on node 1 00:17:00.124 [2024-12-13 11:11:20.557392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:00.124 [2024-12-13 11:11:20.628685] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:00.124 [2024-12-13 11:11:20.628788] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:00.124 [2024-12-13 11:11:20.628796] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:00.124 [2024-12-13 11:11:20.628802] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:00.124 [2024-12-13 11:11:20.628842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:00.124 [2024-12-13 11:11:20.628926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:00.124 [2024-12-13 11:11:20.628953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:00.124 [2024-12-13 11:11:20.628953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:01.062 11:11:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:01.062 11:11:21 -- common/autotest_common.sh@862 -- # return 0 00:17:01.062 11:11:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:01.062 11:11:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:01.062 11:11:21 -- common/autotest_common.sh@10 -- # set +x 00:17:01.062 11:11:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:01.062 11:11:21 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:01.062 11:11:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.062 11:11:21 -- common/autotest_common.sh@10 -- # set +x 00:17:01.062 [2024-12-13 11:11:21.355045] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x156ac50/0x156f140) succeed. 00:17:01.062 [2024-12-13 11:11:21.363155] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x156c240/0x15b07e0) succeed. 
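(The nvmf_create_transport step above goes through the test suite's rpc_cmd wrapper; a rough sketch of the equivalent direct call against the default RPC socket, with the script path and arguments taken from elsewhere in this log, would be:)

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
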
00:17:01.062 11:11:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.062 11:11:21 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:17:01.062 11:11:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:01.062 11:11:21 -- common/autotest_common.sh@10 -- # set +x 00:17:01.062 11:11:21 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:01.062 11:11:21 -- target/host_management.sh@23 -- # cat 00:17:01.062 11:11:21 -- target/host_management.sh@30 -- # rpc_cmd 00:17:01.062 11:11:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.062 11:11:21 -- common/autotest_common.sh@10 -- # set +x 00:17:01.062 Malloc0 00:17:01.062 [2024-12-13 11:11:21.524220] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:01.062 11:11:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.062 11:11:21 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:17:01.062 11:11:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:01.062 11:11:21 -- common/autotest_common.sh@10 -- # set +x 00:17:01.062 11:11:21 -- target/host_management.sh@73 -- # perfpid=1607820 00:17:01.062 11:11:21 -- target/host_management.sh@74 -- # waitforlisten 1607820 /var/tmp/bdevperf.sock 00:17:01.062 11:11:21 -- common/autotest_common.sh@829 -- # '[' -z 1607820 ']' 00:17:01.062 11:11:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:01.062 11:11:21 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:01.062 11:11:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:01.062 11:11:21 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:17:01.062 11:11:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:01.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:01.062 11:11:21 -- nvmf/common.sh@520 -- # config=() 00:17:01.062 11:11:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:01.062 11:11:21 -- nvmf/common.sh@520 -- # local subsystem config 00:17:01.062 11:11:21 -- common/autotest_common.sh@10 -- # set +x 00:17:01.062 11:11:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:01.062 11:11:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:01.062 { 00:17:01.062 "params": { 00:17:01.062 "name": "Nvme$subsystem", 00:17:01.062 "trtype": "$TEST_TRANSPORT", 00:17:01.062 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:01.062 "adrfam": "ipv4", 00:17:01.062 "trsvcid": "$NVMF_PORT", 00:17:01.062 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:01.062 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:01.062 "hdgst": ${hdgst:-false}, 00:17:01.062 "ddgst": ${ddgst:-false} 00:17:01.062 }, 00:17:01.062 "method": "bdev_nvme_attach_controller" 00:17:01.062 } 00:17:01.062 EOF 00:17:01.062 )") 00:17:01.062 11:11:21 -- nvmf/common.sh@542 -- # cat 00:17:01.062 11:11:21 -- nvmf/common.sh@544 -- # jq . 
00:17:01.062 11:11:21 -- nvmf/common.sh@545 -- # IFS=, 00:17:01.062 11:11:21 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:01.062 "params": { 00:17:01.062 "name": "Nvme0", 00:17:01.062 "trtype": "rdma", 00:17:01.062 "traddr": "192.168.100.8", 00:17:01.062 "adrfam": "ipv4", 00:17:01.062 "trsvcid": "4420", 00:17:01.062 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:01.062 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:01.062 "hdgst": false, 00:17:01.062 "ddgst": false 00:17:01.062 }, 00:17:01.062 "method": "bdev_nvme_attach_controller" 00:17:01.062 }' 00:17:01.062 [2024-12-13 11:11:21.614117] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:01.062 [2024-12-13 11:11:21.614160] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1607820 ] 00:17:01.321 EAL: No free 2048 kB hugepages reported on node 1 00:17:01.321 [2024-12-13 11:11:21.666526] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.321 [2024-12-13 11:11:21.732373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.580 Running I/O for 10 seconds... 00:17:02.148 11:11:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:02.148 11:11:22 -- common/autotest_common.sh@862 -- # return 0 00:17:02.148 11:11:22 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:02.148 11:11:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.148 11:11:22 -- common/autotest_common.sh@10 -- # set +x 00:17:02.148 11:11:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.148 11:11:22 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:02.148 11:11:22 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:17:02.148 11:11:22 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:02.148 11:11:22 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:17:02.148 11:11:22 -- target/host_management.sh@52 -- # local ret=1 00:17:02.148 11:11:22 -- target/host_management.sh@53 -- # local i 00:17:02.148 11:11:22 -- target/host_management.sh@54 -- # (( i = 10 )) 00:17:02.148 11:11:22 -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:02.148 11:11:22 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:02.148 11:11:22 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:02.148 11:11:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.148 11:11:22 -- common/autotest_common.sh@10 -- # set +x 00:17:02.148 11:11:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.148 11:11:22 -- target/host_management.sh@55 -- # read_io_count=3201 00:17:02.148 11:11:22 -- target/host_management.sh@58 -- # '[' 3201 -ge 100 ']' 00:17:02.148 11:11:22 -- target/host_management.sh@59 -- # ret=0 00:17:02.148 11:11:22 -- target/host_management.sh@60 -- # break 00:17:02.148 11:11:22 -- target/host_management.sh@64 -- # return 0 00:17:02.148 11:11:22 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:02.148 11:11:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.148 11:11:22 -- common/autotest_common.sh@10 -- # set +x 00:17:02.148 11:11:22 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.148 11:11:22 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:02.148 11:11:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.148 11:11:22 -- common/autotest_common.sh@10 -- # set +x 00:17:02.148 11:11:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.148 11:11:22 -- target/host_management.sh@87 -- # sleep 1 00:17:03.086 [2024-12-13 11:11:23.484850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e7fc00 len:0x10000 key:0x182500 00:17:03.086 [2024-12-13 11:11:23.484881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.086 [2024-12-13 11:11:23.484901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190eff80 len:0x10000 key:0x182600 00:17:03.086 [2024-12-13 11:11:23.484908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.086 [2024-12-13 11:11:23.484916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e6fb80 len:0x10000 key:0x182500 00:17:03.086 [2024-12-13 11:11:23.484922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.086 [2024-12-13 11:11:23.484930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019261b00 len:0x10000 key:0x182700 00:17:03.086 [2024-12-13 11:11:23.484936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.086 [2024-12-13 11:11:23.484944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ecfe80 len:0x10000 key:0x182500 00:17:03.086 [2024-12-13 11:11:23.484950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.086 [2024-12-13 11:11:23.484958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001901f900 len:0x10000 key:0x182600 00:17:03.086 [2024-12-13 11:11:23.484964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.086 [2024-12-13 11:11:23.484972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e3fa00 len:0x10000 key:0x182500 00:17:03.086 [2024-12-13 11:11:23.484978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.086 [2024-12-13 11:11:23.484986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e9fd00 len:0x10000 key:0x182500 00:17:03.086 [2024-12-13 11:11:23.484992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.086 [2024-12-13 11:11:23.485000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190cfe80 len:0x10000 key:0x182600 00:17:03.086 [2024-12-13 11:11:23.485006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.086 [2024-12-13 11:11:23.485013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190dff00 len:0x10000 key:0x182600 00:17:03.086 [2024-12-13 11:11:23.485019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.086 [2024-12-13 11:11:23.485027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001902f980 len:0x10000 key:0x182600 00:17:03.087 [2024-12-13 11:11:23.485032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.087 [2024-12-13 11:11:23.485040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e4fa80 len:0x10000 key:0x182500 00:17:03.087 [2024-12-13 11:11:23.485047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.087 [2024-12-13 11:11:23.485056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:45056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a4b040 len:0x10000 key:0x182000 00:17:03.087 [2024-12-13 11:11:23.485062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.087 [2024-12-13 11:11:23.485070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:45184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e2f980 len:0x10000 key:0x182500 00:17:03.087 [2024-12-13 11:11:23.485076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.087 [2024-12-13 11:11:23.485084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001382a380 len:0x10000 key:0x182400 00:17:03.087 [2024-12-13 11:11:23.485089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.087 [2024-12-13 11:11:23.485098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019231980 len:0x10000 key:0x182700 00:17:03.087 [2024-12-13 11:11:23.485105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.087 [2024-12-13 11:11:23.485112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:45568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001907fc00 len:0x10000 key:0x182600 00:17:03.087 [2024-12-13 11:11:23.485119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 
dnr:0 00:17:03.087 [2024-12-13 11:11:23.485127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001389a700 len:0x10000 key:0x182400 00:17:03.087 [2024-12-13 11:11:23.485133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.087 [2024-12-13 11:11:23.485140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:45824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ea980 len:0x10000 key:0x182400 00:17:03.087 [2024-12-13 11:11:23.485146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.087 [2024-12-13 11:11:23.485154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:45952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019271b80 len:0x10000 key:0x182700 00:17:03.087 [2024-12-13 11:11:23.485159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.087 [2024-12-13 11:11:23.485168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:46080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa780 len:0x10000 key:0x182400 00:17:03.087 [2024-12-13 11:11:23.485175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.087 [2024-12-13 11:11:23.485184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:46208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e1f900 len:0x10000 key:0x182500 00:17:03.087 [2024-12-13 11:11:23.485190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.087 [2024-12-13 11:11:23.485200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:46336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001906fb80 len:0x10000 key:0x182600 00:17:03.087 [2024-12-13 11:11:23.485207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.087 [2024-12-13 11:11:23.485214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:46464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001900f880 len:0x10000 key:0x182600 00:17:03.087 [2024-12-13 11:11:23.485226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.087 [2024-12-13 11:11:23.485234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:46592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190bfe00 len:0x10000 key:0x182600 00:17:03.087 [2024-12-13 11:11:23.485242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.087 [2024-12-13 11:11:23.485250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:46720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ebfe00 len:0x10000 key:0x182500 00:17:03.087 [2024-12-13 11:11:23.485256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.087 [2024-12-13 
11:11:23.485264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:46848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019221900 len:0x10000 key:0x182700 00:17:03.087 [2024-12-13 11:11:23.485274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.087 [2024-12-13 11:11:23.485282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:46976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e8fc80 len:0x10000 key:0x182500 00:17:03.087 [2024-12-13 11:11:23.485288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.087 [2024-12-13 11:11:23.485295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:47104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192c1e00 len:0x10000 key:0x182700 00:17:03.087 [2024-12-13 11:11:23.485301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.087 [2024-12-13 11:11:23.485309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:47232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192b1d80 len:0x10000 key:0x182700 00:17:03.087 [2024-12-13 11:11:23.485316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.087 [2024-12-13 11:11:23.485323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001903fa00 len:0x10000 key:0x182600 00:17:03.087 [2024-12-13 11:11:23.485330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.087 [2024-12-13 11:11:23.485337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:47488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190afd80 len:0x10000 key:0x182600 00:17:03.087 [2024-12-13 11:11:23.485343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.087 [2024-12-13 11:11:23.485351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:47616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001388a680 len:0x10000 key:0x182400 00:17:03.087 [2024-12-13 11:11:23.485358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.087 [2024-12-13 11:11:23.485366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:47744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001381a300 len:0x10000 key:0x182400 00:17:03.087 [2024-12-13 11:11:23.485373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.087 [2024-12-13 11:11:23.485381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:47872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eeff80 len:0x10000 key:0x182500 00:17:03.087 [2024-12-13 11:11:23.485389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.087 [2024-12-13 11:11:23.485397] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:48000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019241a00 len:0x10000 key:0x182700 00:17:03.087 [2024-12-13 11:11:23.485403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.087 [2024-12-13 11:11:23.485411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:48128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e5fb00 len:0x10000 key:0x182500 00:17:03.087 [2024-12-13 11:11:23.485417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.087 [2024-12-13 11:11:23.485425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c1cb000 len:0x10000 key:0x182300 00:17:03.087 [2024-12-13 11:11:23.485431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.087 [2024-12-13 11:11:23.485439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c5eb000 len:0x10000 key:0x182300 00:17:03.087 [2024-12-13 11:11:23.485445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.087 [2024-12-13 11:11:23.485453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c924000 len:0x10000 key:0x182300 00:17:03.087 [2024-12-13 11:11:23.485459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.087 [2024-12-13 11:11:23.485467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c756000 len:0x10000 key:0x182300 00:17:03.087 [2024-12-13 11:11:23.485473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.087 [2024-12-13 11:11:23.485480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c777000 len:0x10000 key:0x182300 00:17:03.087 [2024-12-13 11:11:23.485486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.087 [2024-12-13 11:11:23.485494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c798000 len:0x10000 key:0x182300 00:17:03.087 [2024-12-13 11:11:23.485500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.087 [2024-12-13 11:11:23.485508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c7b9000 len:0x10000 key:0x182300 00:17:03.087 [2024-12-13 11:11:23.485514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.087 [2024-12-13 11:11:23.485521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:20 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c7da000 len:0x10000 key:0x182300 00:17:03.087 [2024-12-13 11:11:23.485527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.087 [2024-12-13 11:11:23.485534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cd65000 len:0x10000 key:0x182300 00:17:03.087 [2024-12-13 11:11:23.485541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.087 [2024-12-13 11:11:23.485551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cd44000 len:0x10000 key:0x182300 00:17:03.087 [2024-12-13 11:11:23.485558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.087 [2024-12-13 11:11:23.485566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cd23000 len:0x10000 key:0x182300 00:17:03.088 [2024-12-13 11:11:23.485572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.088 [2024-12-13 11:11:23.485579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cd02000 len:0x10000 key:0x182300 00:17:03.088 [2024-12-13 11:11:23.485586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.088 [2024-12-13 11:11:23.485593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cce1000 len:0x10000 key:0x182300 00:17:03.088 [2024-12-13 11:11:23.485599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.088 [2024-12-13 11:11:23.485607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ccc0000 len:0x10000 key:0x182300 00:17:03.088 [2024-12-13 11:11:23.485613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.088 [2024-12-13 11:11:23.485620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d0bf000 len:0x10000 key:0x182300 00:17:03.088 [2024-12-13 11:11:23.485627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.088 [2024-12-13 11:11:23.485635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d09e000 len:0x10000 key:0x182300 00:17:03.088 [2024-12-13 11:11:23.485641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.088 [2024-12-13 11:11:23.485649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:41856 len:128 
SGL KEYED DATA BLOCK ADDRESS 0x20000cff9000 len:0x10000 key:0x182300 00:17:03.088 [2024-12-13 11:11:23.485655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.088 [2024-12-13 11:11:23.485662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cfd8000 len:0x10000 key:0x182300 00:17:03.088 [2024-12-13 11:11:23.485668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.088 [2024-12-13 11:11:23.485676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cfb7000 len:0x10000 key:0x182300 00:17:03.088 [2024-12-13 11:11:23.485682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.088 [2024-12-13 11:11:23.485689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cf96000 len:0x10000 key:0x182300 00:17:03.088 [2024-12-13 11:11:23.485695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.088 [2024-12-13 11:11:23.485707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cf75000 len:0x10000 key:0x182300 00:17:03.088 [2024-12-13 11:11:23.485713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.088 [2024-12-13 11:11:23.485721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cf54000 len:0x10000 key:0x182300 00:17:03.088 [2024-12-13 11:11:23.485727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.088 [2024-12-13 11:11:23.485735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cf33000 len:0x10000 key:0x182300 00:17:03.088 [2024-12-13 11:11:23.485741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.088 [2024-12-13 11:11:23.485748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cf12000 len:0x10000 key:0x182300 00:17:03.088 [2024-12-13 11:11:23.485754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.088 [2024-12-13 11:11:23.485762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cef1000 len:0x10000 key:0x182300 00:17:03.088 [2024-12-13 11:11:23.485768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.088 [2024-12-13 11:11:23.485776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ced0000 
len:0x10000 key:0x182300 00:17:03.088 [2024-12-13 11:11:23.485782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.088 [2024-12-13 11:11:23.485790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d2cf000 len:0x10000 key:0x182300 00:17:03.088 [2024-12-13 11:11:23.485796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a0e18000 sqhd:5310 p:0 m:0 dnr:0 00:17:03.088 [2024-12-13 11:11:23.487657] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192015c0 was disconnected and freed. reset controller. 00:17:03.088 [2024-12-13 11:11:23.488495] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:03.088 task offset: 43520 on job bdev=Nvme0n1 fails 00:17:03.088 00:17:03.088 Latency(us) 00:17:03.088 [2024-12-13T10:11:23.657Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:03.088 [2024-12-13T10:11:23.657Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:03.088 [2024-12-13T10:11:23.657Z] Job: Nvme0n1 ended in about 1.59 seconds with error 00:17:03.088 Verification LBA range: start 0x0 length 0x400 00:17:03.088 Nvme0n1 : 1.59 2134.12 133.38 40.35 0.00 29264.57 3276.80 1019060.53 00:17:03.088 [2024-12-13T10:11:23.657Z] =================================================================================================================== 00:17:03.088 [2024-12-13T10:11:23.657Z] Total : 2134.12 133.38 40.35 0.00 29264.57 3276.80 1019060.53 00:17:03.088 [2024-12-13 11:11:23.490013] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:03.088 11:11:23 -- target/host_management.sh@91 -- # kill -9 1607820 00:17:03.088 11:11:23 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:17:03.088 11:11:23 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:17:03.088 11:11:23 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:17:03.088 11:11:23 -- nvmf/common.sh@520 -- # config=() 00:17:03.088 11:11:23 -- nvmf/common.sh@520 -- # local subsystem config 00:17:03.088 11:11:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:03.088 11:11:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:03.088 { 00:17:03.088 "params": { 00:17:03.088 "name": "Nvme$subsystem", 00:17:03.088 "trtype": "$TEST_TRANSPORT", 00:17:03.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:03.088 "adrfam": "ipv4", 00:17:03.088 "trsvcid": "$NVMF_PORT", 00:17:03.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:03.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:03.088 "hdgst": ${hdgst:-false}, 00:17:03.088 "ddgst": ${ddgst:-false} 00:17:03.088 }, 00:17:03.088 "method": "bdev_nvme_attach_controller" 00:17:03.088 } 00:17:03.088 EOF 00:17:03.088 )") 00:17:03.088 11:11:23 -- nvmf/common.sh@542 -- # cat 00:17:03.088 11:11:23 -- nvmf/common.sh@544 -- # jq . 
00:17:03.088 11:11:23 -- nvmf/common.sh@545 -- # IFS=, 00:17:03.088 11:11:23 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:03.088 "params": { 00:17:03.088 "name": "Nvme0", 00:17:03.088 "trtype": "rdma", 00:17:03.088 "traddr": "192.168.100.8", 00:17:03.088 "adrfam": "ipv4", 00:17:03.088 "trsvcid": "4420", 00:17:03.088 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:03.088 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:03.088 "hdgst": false, 00:17:03.088 "ddgst": false 00:17:03.088 }, 00:17:03.088 "method": "bdev_nvme_attach_controller" 00:17:03.088 }' 00:17:03.088 [2024-12-13 11:11:23.538384] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:03.088 [2024-12-13 11:11:23.538427] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1608108 ] 00:17:03.088 EAL: No free 2048 kB hugepages reported on node 1 00:17:03.088 [2024-12-13 11:11:23.591045] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.347 [2024-12-13 11:11:23.657149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.347 Running I/O for 1 seconds... 00:17:04.284 00:17:04.284 Latency(us) 00:17:04.284 [2024-12-13T10:11:24.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.284 [2024-12-13T10:11:24.853Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:04.284 Verification LBA range: start 0x0 length 0x400 00:17:04.284 Nvme0n1 : 1.00 5920.23 370.01 0.00 0.00 10647.82 503.66 24466.77 00:17:04.284 [2024-12-13T10:11:24.853Z] =================================================================================================================== 00:17:04.284 [2024-12-13T10:11:24.853Z] Total : 5920.23 370.01 0.00 0.00 10647.82 503.66 24466.77 00:17:04.543 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 1607820 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:17:04.543 11:11:25 -- target/host_management.sh@101 -- # stoptarget 00:17:04.543 11:11:25 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:17:04.543 11:11:25 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:04.543 11:11:25 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:04.543 11:11:25 -- target/host_management.sh@40 -- # nvmftestfini 00:17:04.543 11:11:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:04.543 11:11:25 -- nvmf/common.sh@116 -- # sync 00:17:04.543 11:11:25 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:04.543 11:11:25 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:04.543 11:11:25 -- nvmf/common.sh@119 -- # set +e 00:17:04.543 11:11:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:04.543 11:11:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:04.543 rmmod nvme_rdma 00:17:04.543 rmmod nvme_fabrics 00:17:04.543 11:11:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:04.543 11:11:25 -- nvmf/common.sh@123 -- # set -e 00:17:04.543 11:11:25 -- nvmf/common.sh@124 -- # return 0 00:17:04.543 11:11:25 -- nvmf/common.sh@477 -- # '[' -n 1607512 ']' 00:17:04.543 11:11:25 -- nvmf/common.sh@478 -- # killprocess 1607512 00:17:04.543 
11:11:25 -- common/autotest_common.sh@936 -- # '[' -z 1607512 ']' 00:17:04.543 11:11:25 -- common/autotest_common.sh@940 -- # kill -0 1607512 00:17:04.543 11:11:25 -- common/autotest_common.sh@941 -- # uname 00:17:04.543 11:11:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:04.543 11:11:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1607512 00:17:04.801 11:11:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:04.801 11:11:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:04.801 11:11:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1607512' 00:17:04.801 killing process with pid 1607512 00:17:04.801 11:11:25 -- common/autotest_common.sh@955 -- # kill 1607512 00:17:04.801 11:11:25 -- common/autotest_common.sh@960 -- # wait 1607512 00:17:05.059 [2024-12-13 11:11:25.408923] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:05.059 11:11:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:05.059 11:11:25 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:05.059 00:17:05.059 real 0m4.976s 00:17:05.059 user 0m22.423s 00:17:05.059 sys 0m0.822s 00:17:05.059 11:11:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:05.059 11:11:25 -- common/autotest_common.sh@10 -- # set +x 00:17:05.059 ************************************ 00:17:05.059 END TEST nvmf_host_management 00:17:05.059 ************************************ 00:17:05.059 11:11:25 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:05.059 00:17:05.059 real 0m10.877s 00:17:05.059 user 0m24.133s 00:17:05.059 sys 0m5.172s 00:17:05.059 11:11:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:05.059 11:11:25 -- common/autotest_common.sh@10 -- # set +x 00:17:05.059 ************************************ 00:17:05.059 END TEST nvmf_host_management 00:17:05.059 ************************************ 00:17:05.060 11:11:25 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:17:05.060 11:11:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:05.060 11:11:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:05.060 11:11:25 -- common/autotest_common.sh@10 -- # set +x 00:17:05.060 ************************************ 00:17:05.060 START TEST nvmf_lvol 00:17:05.060 ************************************ 00:17:05.060 11:11:25 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:17:05.060 * Looking for test storage... 
00:17:05.060 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:05.060 11:11:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:05.060 11:11:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:05.060 11:11:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:05.319 11:11:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:05.319 11:11:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:05.319 11:11:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:05.319 11:11:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:05.319 11:11:25 -- scripts/common.sh@335 -- # IFS=.-: 00:17:05.319 11:11:25 -- scripts/common.sh@335 -- # read -ra ver1 00:17:05.319 11:11:25 -- scripts/common.sh@336 -- # IFS=.-: 00:17:05.319 11:11:25 -- scripts/common.sh@336 -- # read -ra ver2 00:17:05.319 11:11:25 -- scripts/common.sh@337 -- # local 'op=<' 00:17:05.319 11:11:25 -- scripts/common.sh@339 -- # ver1_l=2 00:17:05.319 11:11:25 -- scripts/common.sh@340 -- # ver2_l=1 00:17:05.319 11:11:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:05.319 11:11:25 -- scripts/common.sh@343 -- # case "$op" in 00:17:05.319 11:11:25 -- scripts/common.sh@344 -- # : 1 00:17:05.319 11:11:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:05.319 11:11:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:05.319 11:11:25 -- scripts/common.sh@364 -- # decimal 1 00:17:05.319 11:11:25 -- scripts/common.sh@352 -- # local d=1 00:17:05.319 11:11:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:05.319 11:11:25 -- scripts/common.sh@354 -- # echo 1 00:17:05.319 11:11:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:05.319 11:11:25 -- scripts/common.sh@365 -- # decimal 2 00:17:05.319 11:11:25 -- scripts/common.sh@352 -- # local d=2 00:17:05.319 11:11:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:05.319 11:11:25 -- scripts/common.sh@354 -- # echo 2 00:17:05.319 11:11:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:05.319 11:11:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:05.319 11:11:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:05.319 11:11:25 -- scripts/common.sh@367 -- # return 0 00:17:05.319 11:11:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:05.319 11:11:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:05.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.319 --rc genhtml_branch_coverage=1 00:17:05.319 --rc genhtml_function_coverage=1 00:17:05.319 --rc genhtml_legend=1 00:17:05.319 --rc geninfo_all_blocks=1 00:17:05.319 --rc geninfo_unexecuted_blocks=1 00:17:05.319 00:17:05.319 ' 00:17:05.319 11:11:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:05.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.319 --rc genhtml_branch_coverage=1 00:17:05.319 --rc genhtml_function_coverage=1 00:17:05.319 --rc genhtml_legend=1 00:17:05.319 --rc geninfo_all_blocks=1 00:17:05.319 --rc geninfo_unexecuted_blocks=1 00:17:05.319 00:17:05.319 ' 00:17:05.319 11:11:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:05.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.319 --rc genhtml_branch_coverage=1 00:17:05.319 --rc genhtml_function_coverage=1 00:17:05.319 --rc genhtml_legend=1 00:17:05.319 --rc geninfo_all_blocks=1 00:17:05.319 --rc geninfo_unexecuted_blocks=1 00:17:05.319 00:17:05.319 ' 
00:17:05.319 11:11:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:05.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.319 --rc genhtml_branch_coverage=1 00:17:05.319 --rc genhtml_function_coverage=1 00:17:05.319 --rc genhtml_legend=1 00:17:05.319 --rc geninfo_all_blocks=1 00:17:05.319 --rc geninfo_unexecuted_blocks=1 00:17:05.319 00:17:05.319 ' 00:17:05.319 11:11:25 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:05.319 11:11:25 -- nvmf/common.sh@7 -- # uname -s 00:17:05.319 11:11:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:05.319 11:11:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:05.319 11:11:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:05.319 11:11:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:05.319 11:11:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:05.319 11:11:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:05.319 11:11:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:05.319 11:11:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:05.319 11:11:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:05.319 11:11:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:05.319 11:11:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:17:05.319 11:11:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:17:05.319 11:11:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:05.319 11:11:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:05.319 11:11:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:05.319 11:11:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:05.319 11:11:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:05.319 11:11:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:05.319 11:11:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:05.319 11:11:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.319 11:11:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.319 11:11:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.319 11:11:25 -- paths/export.sh@5 -- # export PATH 00:17:05.319 11:11:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.319 11:11:25 -- nvmf/common.sh@46 -- # : 0 00:17:05.319 11:11:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:05.319 11:11:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:05.319 11:11:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:05.319 11:11:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:05.319 11:11:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:05.319 11:11:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:05.319 11:11:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:05.319 11:11:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:05.319 11:11:25 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:05.319 11:11:25 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:05.319 11:11:25 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:05.319 11:11:25 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:05.319 11:11:25 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:05.319 11:11:25 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:05.319 11:11:25 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:05.319 11:11:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:05.319 11:11:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:05.319 11:11:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:05.319 11:11:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:05.319 11:11:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.319 11:11:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:05.319 11:11:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.319 11:11:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:05.319 11:11:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:05.319 11:11:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:05.319 11:11:25 -- common/autotest_common.sh@10 -- # set +x 00:17:10.589 11:11:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:10.589 11:11:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:10.589 11:11:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:10.589 11:11:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:10.589 11:11:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:10.589 11:11:30 -- 
nvmf/common.sh@292 -- # pci_drivers=() 00:17:10.589 11:11:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:10.589 11:11:30 -- nvmf/common.sh@294 -- # net_devs=() 00:17:10.589 11:11:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:10.589 11:11:30 -- nvmf/common.sh@295 -- # e810=() 00:17:10.589 11:11:30 -- nvmf/common.sh@295 -- # local -ga e810 00:17:10.589 11:11:30 -- nvmf/common.sh@296 -- # x722=() 00:17:10.589 11:11:30 -- nvmf/common.sh@296 -- # local -ga x722 00:17:10.589 11:11:30 -- nvmf/common.sh@297 -- # mlx=() 00:17:10.589 11:11:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:10.589 11:11:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:10.589 11:11:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:10.589 11:11:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:10.589 11:11:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:10.589 11:11:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:10.589 11:11:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:10.589 11:11:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:10.589 11:11:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:10.589 11:11:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:10.589 11:11:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:10.589 11:11:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:10.589 11:11:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:10.589 11:11:30 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:10.589 11:11:30 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:10.589 11:11:30 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:10.589 11:11:30 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:17:10.589 11:11:30 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:10.589 11:11:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:10.589 11:11:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:10.589 11:11:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:17:10.589 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:17:10.589 11:11:30 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:10.589 11:11:30 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:10.589 11:11:30 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:10.589 11:11:30 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:10.589 11:11:30 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:10.589 11:11:30 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:10.589 11:11:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:10.589 11:11:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:17:10.589 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:17:10.589 11:11:30 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:10.589 11:11:30 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:10.589 11:11:30 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:10.589 11:11:30 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:10.589 11:11:30 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:10.589 11:11:30 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:10.589 11:11:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:10.589 11:11:30 -- 
nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:10.589 11:11:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:10.589 11:11:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.589 11:11:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:10.589 11:11:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.589 11:11:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:17:10.589 Found net devices under 0000:18:00.0: mlx_0_0 00:17:10.589 11:11:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.589 11:11:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:10.589 11:11:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.589 11:11:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:10.589 11:11:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.589 11:11:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:17:10.589 Found net devices under 0000:18:00.1: mlx_0_1 00:17:10.589 11:11:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.589 11:11:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:10.589 11:11:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:10.589 11:11:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:10.589 11:11:30 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:10.589 11:11:30 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:10.589 11:11:30 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:10.589 11:11:30 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:10.589 11:11:30 -- nvmf/common.sh@57 -- # uname 00:17:10.589 11:11:30 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:10.589 11:11:30 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:10.589 11:11:30 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:10.589 11:11:30 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:10.589 11:11:30 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:10.589 11:11:30 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:10.589 11:11:30 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:10.589 11:11:30 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:10.589 11:11:30 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:10.589 11:11:30 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:10.589 11:11:30 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:10.589 11:11:30 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:10.589 11:11:30 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:10.590 11:11:30 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:10.590 11:11:30 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:10.590 11:11:30 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:10.590 11:11:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:10.590 11:11:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:10.590 11:11:30 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:10.590 11:11:30 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:10.590 11:11:30 -- nvmf/common.sh@104 -- # continue 2 00:17:10.590 11:11:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:10.590 11:11:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:10.590 11:11:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:10.590 11:11:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:17:10.590 11:11:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:10.590 11:11:30 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:10.590 11:11:30 -- nvmf/common.sh@104 -- # continue 2 00:17:10.590 11:11:30 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:10.590 11:11:30 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:10.590 11:11:30 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:10.590 11:11:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:10.590 11:11:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:10.590 11:11:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:10.590 11:11:30 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:10.590 11:11:30 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:10.590 11:11:30 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:10.590 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:10.590 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:17:10.590 altname enp24s0f0np0 00:17:10.590 altname ens785f0np0 00:17:10.590 inet 192.168.100.8/24 scope global mlx_0_0 00:17:10.590 valid_lft forever preferred_lft forever 00:17:10.590 11:11:30 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:10.590 11:11:30 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:10.590 11:11:30 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:10.590 11:11:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:10.590 11:11:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:10.590 11:11:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:10.590 11:11:30 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:10.590 11:11:30 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:10.590 11:11:30 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:17:10.590 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:10.590 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:17:10.590 altname enp24s0f1np1 00:17:10.590 altname ens785f1np1 00:17:10.590 inet 192.168.100.9/24 scope global mlx_0_1 00:17:10.590 valid_lft forever preferred_lft forever 00:17:10.590 11:11:30 -- nvmf/common.sh@410 -- # return 0 00:17:10.590 11:11:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:10.590 11:11:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:10.590 11:11:30 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:10.590 11:11:30 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:10.590 11:11:30 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:10.590 11:11:30 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:10.590 11:11:30 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:10.590 11:11:30 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:10.590 11:11:30 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:10.590 11:11:30 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:10.590 11:11:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:10.590 11:11:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:10.590 11:11:30 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:10.590 11:11:30 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:10.590 11:11:30 -- nvmf/common.sh@104 -- # continue 2 00:17:10.590 11:11:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:10.590 11:11:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:10.590 11:11:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\0 ]] 00:17:10.590 11:11:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:10.590 11:11:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:10.590 11:11:30 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:10.590 11:11:30 -- nvmf/common.sh@104 -- # continue 2 00:17:10.590 11:11:30 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:10.590 11:11:30 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:10.590 11:11:30 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:10.590 11:11:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:10.590 11:11:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:10.590 11:11:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:10.590 11:11:30 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:10.590 11:11:30 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:10.590 11:11:30 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:10.590 11:11:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:10.590 11:11:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:10.590 11:11:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:10.590 11:11:30 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:10.590 192.168.100.9' 00:17:10.590 11:11:30 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:10.590 192.168.100.9' 00:17:10.590 11:11:30 -- nvmf/common.sh@445 -- # head -n 1 00:17:10.590 11:11:30 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:10.590 11:11:30 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:10.590 192.168.100.9' 00:17:10.590 11:11:30 -- nvmf/common.sh@446 -- # head -n 1 00:17:10.590 11:11:30 -- nvmf/common.sh@446 -- # tail -n +2 00:17:10.590 11:11:30 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:10.590 11:11:30 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:10.590 11:11:30 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:10.590 11:11:30 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:10.590 11:11:30 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:10.590 11:11:30 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:10.590 11:11:30 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:10.590 11:11:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:10.590 11:11:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:10.590 11:11:30 -- common/autotest_common.sh@10 -- # set +x 00:17:10.590 11:11:30 -- nvmf/common.sh@469 -- # nvmfpid=1611655 00:17:10.590 11:11:30 -- nvmf/common.sh@470 -- # waitforlisten 1611655 00:17:10.590 11:11:30 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:10.590 11:11:30 -- common/autotest_common.sh@829 -- # '[' -z 1611655 ']' 00:17:10.590 11:11:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.590 11:11:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:10.590 11:11:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.590 11:11:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:10.590 11:11:30 -- common/autotest_common.sh@10 -- # set +x 00:17:10.590 [2024-12-13 11:11:30.720046] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:10.590 [2024-12-13 11:11:30.720091] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:10.590 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.590 [2024-12-13 11:11:30.773758] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:10.590 [2024-12-13 11:11:30.841871] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:10.590 [2024-12-13 11:11:30.841980] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:10.590 [2024-12-13 11:11:30.841987] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:10.590 [2024-12-13 11:11:30.841993] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:10.590 [2024-12-13 11:11:30.842081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.590 [2024-12-13 11:11:30.842097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:10.590 [2024-12-13 11:11:30.842099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.157 11:11:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:11.157 11:11:31 -- common/autotest_common.sh@862 -- # return 0 00:17:11.157 11:11:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:11.157 11:11:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:11.157 11:11:31 -- common/autotest_common.sh@10 -- # set +x 00:17:11.157 11:11:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:11.157 11:11:31 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:11.157 [2024-12-13 11:11:31.707899] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xfcde40/0xfd2330) succeed. 00:17:11.158 [2024-12-13 11:11:31.715914] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xfcf390/0x10139d0) succeed. 
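With the target up and the RDMA transport created, the trace below builds the lvol stack that nvmf_lvol exercises. A condensed shell sketch of that sequence, with the rpc.py path shortened and shell variables standing in for the UUIDs each call returns (an illustration assembled from the trace, not a verbatim excerpt of nvmf_lvol.sh):

  rpc=scripts/rpc.py
  # back the lvol store with a RAID0 of two 64 MiB malloc bdevs
  base0=$($rpc bdev_malloc_create 64 512)
  base1=$($rpc bdev_malloc_create 64 512)
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "$base0 $base1"
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)            # 20 MiB logical volume
  # export it over NVMe-oF/RDMA
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
  # drive random writes from the initiator side for 10 seconds
  build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
  # manipulate the volume while that I/O is running
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30                           # grow the live lvol to 30 MiB
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"                            # detach the clone from its snapshot
  wait
  # teardown
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc bdev_lvol_delete "$lvol"
  $rpc bdev_lvol_delete_lvstore -u "$lvs"

The snapshot, resize, clone, and inflate all happen while spdk_nvme_perf is writing to the exported namespace, which is what the IOPS and latency summary further down measures.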
00:17:11.416 11:11:31 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:11.674 11:11:32 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:11.674 11:11:32 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:11.674 11:11:32 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:11.674 11:11:32 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:11.933 11:11:32 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:12.192 11:11:32 -- target/nvmf_lvol.sh@29 -- # lvs=f68dd7a0-0aac-4d98-ba26-42e364bc1330 00:17:12.192 11:11:32 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f68dd7a0-0aac-4d98-ba26-42e364bc1330 lvol 20 00:17:12.192 11:11:32 -- target/nvmf_lvol.sh@32 -- # lvol=037fdd23-0ad3-4dea-a1f9-a74c29a13aa1 00:17:12.192 11:11:32 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:12.451 11:11:32 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 037fdd23-0ad3-4dea-a1f9-a74c29a13aa1 00:17:12.709 11:11:33 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:17:12.709 [2024-12-13 11:11:33.217935] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:12.709 11:11:33 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:12.968 11:11:33 -- target/nvmf_lvol.sh@42 -- # perf_pid=1612224 00:17:12.968 11:11:33 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:12.968 11:11:33 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:12.968 EAL: No free 2048 kB hugepages reported on node 1 00:17:13.904 11:11:34 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 037fdd23-0ad3-4dea-a1f9-a74c29a13aa1 MY_SNAPSHOT 00:17:14.163 11:11:34 -- target/nvmf_lvol.sh@47 -- # snapshot=68b28a23-42eb-48ba-9904-cbbf40843272 00:17:14.163 11:11:34 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 037fdd23-0ad3-4dea-a1f9-a74c29a13aa1 30 00:17:14.422 11:11:34 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 68b28a23-42eb-48ba-9904-cbbf40843272 MY_CLONE 00:17:14.422 11:11:34 -- target/nvmf_lvol.sh@49 -- # clone=33f6ae42-047b-4528-8bf2-cd397a250f6f 00:17:14.422 11:11:34 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 33f6ae42-047b-4528-8bf2-cd397a250f6f 00:17:14.679 11:11:35 -- target/nvmf_lvol.sh@53 -- # wait 1612224 00:17:24.657 Initializing NVMe Controllers 00:17:24.657 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: 
nqn.2016-06.io.spdk:cnode0 00:17:24.657 Controller IO queue size 128, less than required. 00:17:24.657 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:24.657 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:24.657 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:24.657 Initialization complete. Launching workers. 00:17:24.657 ======================================================== 00:17:24.657 Latency(us) 00:17:24.657 Device Information : IOPS MiB/s Average min max 00:17:24.657 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 18164.60 70.96 7048.26 1660.49 41762.32 00:17:24.657 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 18131.20 70.83 7061.27 3206.10 42208.69 00:17:24.657 ======================================================== 00:17:24.657 Total : 36295.80 141.78 7054.75 1660.49 42208.69 00:17:24.657 00:17:24.657 11:11:44 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:24.657 11:11:44 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 037fdd23-0ad3-4dea-a1f9-a74c29a13aa1 00:17:24.657 11:11:45 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f68dd7a0-0aac-4d98-ba26-42e364bc1330 00:17:24.916 11:11:45 -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:24.916 11:11:45 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:24.916 11:11:45 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:24.916 11:11:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:24.916 11:11:45 -- nvmf/common.sh@116 -- # sync 00:17:24.916 11:11:45 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:24.916 11:11:45 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:24.916 11:11:45 -- nvmf/common.sh@119 -- # set +e 00:17:24.916 11:11:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:24.916 11:11:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:24.916 rmmod nvme_rdma 00:17:24.916 rmmod nvme_fabrics 00:17:24.916 11:11:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:24.916 11:11:45 -- nvmf/common.sh@123 -- # set -e 00:17:24.916 11:11:45 -- nvmf/common.sh@124 -- # return 0 00:17:24.916 11:11:45 -- nvmf/common.sh@477 -- # '[' -n 1611655 ']' 00:17:24.916 11:11:45 -- nvmf/common.sh@478 -- # killprocess 1611655 00:17:24.916 11:11:45 -- common/autotest_common.sh@936 -- # '[' -z 1611655 ']' 00:17:24.916 11:11:45 -- common/autotest_common.sh@940 -- # kill -0 1611655 00:17:24.916 11:11:45 -- common/autotest_common.sh@941 -- # uname 00:17:24.916 11:11:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:24.916 11:11:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1611655 00:17:24.916 11:11:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:24.916 11:11:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:24.916 11:11:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1611655' 00:17:24.916 killing process with pid 1611655 00:17:24.916 11:11:45 -- common/autotest_common.sh@955 -- # kill 1611655 00:17:24.916 11:11:45 -- common/autotest_common.sh@960 -- # wait 1611655 00:17:25.175 11:11:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:25.175 11:11:45 -- 
nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:25.175 00:17:25.175 real 0m20.215s 00:17:25.175 user 1m10.249s 00:17:25.175 sys 0m4.819s 00:17:25.175 11:11:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:25.175 11:11:45 -- common/autotest_common.sh@10 -- # set +x 00:17:25.175 ************************************ 00:17:25.175 END TEST nvmf_lvol 00:17:25.175 ************************************ 00:17:25.434 11:11:45 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:17:25.434 11:11:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:25.434 11:11:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:25.434 11:11:45 -- common/autotest_common.sh@10 -- # set +x 00:17:25.434 ************************************ 00:17:25.434 START TEST nvmf_lvs_grow 00:17:25.434 ************************************ 00:17:25.434 11:11:45 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:17:25.434 * Looking for test storage... 00:17:25.434 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:25.434 11:11:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:25.434 11:11:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:25.434 11:11:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:25.434 11:11:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:25.434 11:11:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:25.434 11:11:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:25.434 11:11:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:25.434 11:11:45 -- scripts/common.sh@335 -- # IFS=.-: 00:17:25.434 11:11:45 -- scripts/common.sh@335 -- # read -ra ver1 00:17:25.434 11:11:45 -- scripts/common.sh@336 -- # IFS=.-: 00:17:25.434 11:11:45 -- scripts/common.sh@336 -- # read -ra ver2 00:17:25.434 11:11:45 -- scripts/common.sh@337 -- # local 'op=<' 00:17:25.434 11:11:45 -- scripts/common.sh@339 -- # ver1_l=2 00:17:25.434 11:11:45 -- scripts/common.sh@340 -- # ver2_l=1 00:17:25.434 11:11:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:25.434 11:11:45 -- scripts/common.sh@343 -- # case "$op" in 00:17:25.434 11:11:45 -- scripts/common.sh@344 -- # : 1 00:17:25.434 11:11:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:25.434 11:11:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:25.434 11:11:45 -- scripts/common.sh@364 -- # decimal 1 00:17:25.434 11:11:45 -- scripts/common.sh@352 -- # local d=1 00:17:25.434 11:11:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:25.434 11:11:45 -- scripts/common.sh@354 -- # echo 1 00:17:25.434 11:11:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:25.434 11:11:45 -- scripts/common.sh@365 -- # decimal 2 00:17:25.434 11:11:45 -- scripts/common.sh@352 -- # local d=2 00:17:25.434 11:11:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:25.434 11:11:45 -- scripts/common.sh@354 -- # echo 2 00:17:25.434 11:11:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:25.434 11:11:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:25.434 11:11:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:25.434 11:11:45 -- scripts/common.sh@367 -- # return 0 00:17:25.434 11:11:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:25.434 11:11:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:25.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.434 --rc genhtml_branch_coverage=1 00:17:25.434 --rc genhtml_function_coverage=1 00:17:25.434 --rc genhtml_legend=1 00:17:25.434 --rc geninfo_all_blocks=1 00:17:25.434 --rc geninfo_unexecuted_blocks=1 00:17:25.434 00:17:25.434 ' 00:17:25.434 11:11:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:25.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.434 --rc genhtml_branch_coverage=1 00:17:25.434 --rc genhtml_function_coverage=1 00:17:25.434 --rc genhtml_legend=1 00:17:25.434 --rc geninfo_all_blocks=1 00:17:25.434 --rc geninfo_unexecuted_blocks=1 00:17:25.434 00:17:25.434 ' 00:17:25.434 11:11:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:25.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.434 --rc genhtml_branch_coverage=1 00:17:25.434 --rc genhtml_function_coverage=1 00:17:25.434 --rc genhtml_legend=1 00:17:25.434 --rc geninfo_all_blocks=1 00:17:25.434 --rc geninfo_unexecuted_blocks=1 00:17:25.434 00:17:25.434 ' 00:17:25.434 11:11:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:25.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.434 --rc genhtml_branch_coverage=1 00:17:25.434 --rc genhtml_function_coverage=1 00:17:25.434 --rc genhtml_legend=1 00:17:25.434 --rc geninfo_all_blocks=1 00:17:25.434 --rc geninfo_unexecuted_blocks=1 00:17:25.434 00:17:25.434 ' 00:17:25.434 11:11:45 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:25.434 11:11:45 -- nvmf/common.sh@7 -- # uname -s 00:17:25.434 11:11:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:25.434 11:11:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:25.434 11:11:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:25.434 11:11:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:25.434 11:11:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:25.434 11:11:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:25.434 11:11:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:25.434 11:11:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:25.434 11:11:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:25.434 11:11:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:25.434 11:11:45 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:17:25.434 11:11:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:17:25.434 11:11:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:25.434 11:11:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:25.434 11:11:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:25.434 11:11:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:25.434 11:11:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:25.434 11:11:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:25.434 11:11:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:25.435 11:11:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.435 11:11:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.435 11:11:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.435 11:11:45 -- paths/export.sh@5 -- # export PATH 00:17:25.435 11:11:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.435 11:11:45 -- nvmf/common.sh@46 -- # : 0 00:17:25.435 11:11:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:25.435 11:11:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:25.435 11:11:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:25.435 11:11:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:25.435 11:11:45 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:25.435 11:11:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:25.435 11:11:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:25.435 11:11:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:25.435 11:11:45 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:25.435 11:11:45 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:25.435 11:11:45 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:17:25.435 11:11:45 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:25.435 11:11:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:25.435 11:11:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:25.435 11:11:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:25.435 11:11:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:25.435 11:11:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.435 11:11:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:25.435 11:11:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.435 11:11:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:25.435 11:11:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:25.435 11:11:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:25.435 11:11:45 -- common/autotest_common.sh@10 -- # set +x 00:17:30.714 11:11:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:30.714 11:11:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:30.714 11:11:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:30.714 11:11:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:30.714 11:11:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:30.714 11:11:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:30.714 11:11:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:30.714 11:11:51 -- nvmf/common.sh@294 -- # net_devs=() 00:17:30.714 11:11:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:30.714 11:11:51 -- nvmf/common.sh@295 -- # e810=() 00:17:30.714 11:11:51 -- nvmf/common.sh@295 -- # local -ga e810 00:17:30.714 11:11:51 -- nvmf/common.sh@296 -- # x722=() 00:17:30.714 11:11:51 -- nvmf/common.sh@296 -- # local -ga x722 00:17:30.714 11:11:51 -- nvmf/common.sh@297 -- # mlx=() 00:17:30.714 11:11:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:30.714 11:11:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:30.714 11:11:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:30.714 11:11:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:30.714 11:11:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:30.714 11:11:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:30.714 11:11:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:30.714 11:11:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:30.714 11:11:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:30.714 11:11:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:30.714 11:11:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:30.714 11:11:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:30.714 11:11:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:30.714 11:11:51 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 
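The array bucketing under way here is how common.sh decides which NICs the run may use: PCI vendor:device pairs are sorted into e810, x722, and mlx lists, and with SPDK_TEST_NVMF_NICS=mlx5 only the Mellanox list survives. A rough standalone approximation of that classification, assuming lspci is available (the real helper reads a pre-built pci_bus_cache and also handles driver-binding states; this only reproduces the ID matching visible in the trace, where the same IDs appear 0x-prefixed):

  declare -a e810 x722 mlx
  while read -r addr _class vendor device _rest; do
    case "$vendor:$device" in
      8086:1592|8086:159b) e810+=("$addr") ;;
      8086:37d2)           x722+=("$addr") ;;
      15b3:1013|15b3:1015|15b3:1017|15b3:1019|\
      15b3:101d|15b3:1021|15b3:a2d6|15b3:a2dc) mlx+=("$addr") ;;
    esac
  done < <(lspci -Dnmm | tr -d '"')
  # SPDK_TEST_NVMF_NICS=mlx5, so only the Mellanox list is kept from here on
  for pci in "${mlx[@]}"; do
    ls "/sys/bus/pci/devices/$pci/net/"    # resolves each port to its netdev
  done

On this rig the two matching functions are 0000:18:00.0 and 0000:18:00.1 (device ID 0x1015), whose netdevs are mlx_0_0 and mlx_0_1, as the next stretch of the trace confirms.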
00:17:30.714 11:11:51 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:30.714 11:11:51 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:30.714 11:11:51 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:17:30.714 11:11:51 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:30.714 11:11:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:30.714 11:11:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:30.714 11:11:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:17:30.714 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:17:30.714 11:11:51 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:30.714 11:11:51 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:30.714 11:11:51 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:30.714 11:11:51 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:30.714 11:11:51 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:30.714 11:11:51 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:30.714 11:11:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:30.714 11:11:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:17:30.714 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:17:30.714 11:11:51 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:30.714 11:11:51 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:30.714 11:11:51 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:30.714 11:11:51 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:30.714 11:11:51 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:30.714 11:11:51 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:30.714 11:11:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:30.714 11:11:51 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:30.714 11:11:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:30.714 11:11:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.714 11:11:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:30.714 11:11:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.714 11:11:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:17:30.714 Found net devices under 0000:18:00.0: mlx_0_0 00:17:30.714 11:11:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.714 11:11:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:30.714 11:11:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.714 11:11:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:30.714 11:11:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.714 11:11:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:17:30.714 Found net devices under 0000:18:00.1: mlx_0_1 00:17:30.714 11:11:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.714 11:11:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:30.714 11:11:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:30.714 11:11:51 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:30.714 11:11:51 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:30.715 11:11:51 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:30.715 11:11:51 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:30.715 11:11:51 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:30.715 11:11:51 -- nvmf/common.sh@57 -- # uname 00:17:30.715 11:11:51 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux 
']' 00:17:30.715 11:11:51 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:30.715 11:11:51 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:30.715 11:11:51 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:30.715 11:11:51 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:30.715 11:11:51 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:30.715 11:11:51 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:30.715 11:11:51 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:30.715 11:11:51 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:30.715 11:11:51 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:30.715 11:11:51 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:30.715 11:11:51 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:30.715 11:11:51 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:30.715 11:11:51 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:30.715 11:11:51 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:30.715 11:11:51 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:30.715 11:11:51 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:30.715 11:11:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:30.715 11:11:51 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:30.715 11:11:51 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:30.715 11:11:51 -- nvmf/common.sh@104 -- # continue 2 00:17:30.715 11:11:51 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:30.715 11:11:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:30.715 11:11:51 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:30.715 11:11:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:30.715 11:11:51 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:30.715 11:11:51 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:30.715 11:11:51 -- nvmf/common.sh@104 -- # continue 2 00:17:30.715 11:11:51 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:30.715 11:11:51 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:30.715 11:11:51 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:30.715 11:11:51 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:30.715 11:11:51 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:30.715 11:11:51 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:30.715 11:11:51 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:30.715 11:11:51 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:30.715 11:11:51 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:30.715 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:30.715 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:17:30.715 altname enp24s0f0np0 00:17:30.715 altname ens785f0np0 00:17:30.715 inet 192.168.100.8/24 scope global mlx_0_0 00:17:30.715 valid_lft forever preferred_lft forever 00:17:30.715 11:11:51 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:30.715 11:11:51 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:30.715 11:11:51 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:30.715 11:11:51 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:30.715 11:11:51 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:30.715 11:11:51 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:30.715 11:11:51 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:30.715 11:11:51 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:30.715 11:11:51 -- nvmf/common.sh@80 -- # 
ip addr show mlx_0_1 00:17:30.715 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:30.715 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:17:30.715 altname enp24s0f1np1 00:17:30.715 altname ens785f1np1 00:17:30.715 inet 192.168.100.9/24 scope global mlx_0_1 00:17:30.715 valid_lft forever preferred_lft forever 00:17:30.715 11:11:51 -- nvmf/common.sh@410 -- # return 0 00:17:30.715 11:11:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:30.715 11:11:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:30.715 11:11:51 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:30.974 11:11:51 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:30.974 11:11:51 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:30.974 11:11:51 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:30.974 11:11:51 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:30.974 11:11:51 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:30.974 11:11:51 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:30.974 11:11:51 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:30.974 11:11:51 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:30.974 11:11:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:30.974 11:11:51 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:30.974 11:11:51 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:30.974 11:11:51 -- nvmf/common.sh@104 -- # continue 2 00:17:30.974 11:11:51 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:30.974 11:11:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:30.974 11:11:51 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:30.974 11:11:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:30.974 11:11:51 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:30.974 11:11:51 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:30.974 11:11:51 -- nvmf/common.sh@104 -- # continue 2 00:17:30.974 11:11:51 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:30.974 11:11:51 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:30.974 11:11:51 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:30.974 11:11:51 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:30.974 11:11:51 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:30.974 11:11:51 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:30.974 11:11:51 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:30.974 11:11:51 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:30.975 11:11:51 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:30.975 11:11:51 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:30.975 11:11:51 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:30.975 11:11:51 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:30.975 11:11:51 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:30.975 192.168.100.9' 00:17:30.975 11:11:51 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:30.975 192.168.100.9' 00:17:30.975 11:11:51 -- nvmf/common.sh@445 -- # head -n 1 00:17:30.975 11:11:51 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:30.975 11:11:51 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:30.975 192.168.100.9' 00:17:30.975 11:11:51 -- nvmf/common.sh@446 -- # tail -n +2 00:17:30.975 11:11:51 -- nvmf/common.sh@446 -- # head -n 1 00:17:30.975 11:11:51 -- nvmf/common.sh@446 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:30.975 11:11:51 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:30.975 11:11:51 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:30.975 11:11:51 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:30.975 11:11:51 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:30.975 11:11:51 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:30.975 11:11:51 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:17:30.975 11:11:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:30.975 11:11:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:30.975 11:11:51 -- common/autotest_common.sh@10 -- # set +x 00:17:30.975 11:11:51 -- nvmf/common.sh@469 -- # nvmfpid=1617621 00:17:30.975 11:11:51 -- nvmf/common.sh@470 -- # waitforlisten 1617621 00:17:30.975 11:11:51 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:30.975 11:11:51 -- common/autotest_common.sh@829 -- # '[' -z 1617621 ']' 00:17:30.975 11:11:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.975 11:11:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:30.975 11:11:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.975 11:11:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:30.975 11:11:51 -- common/autotest_common.sh@10 -- # set +x 00:17:30.975 [2024-12-13 11:11:51.415708] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:30.975 [2024-12-13 11:11:51.415748] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.975 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.975 [2024-12-13 11:11:51.467238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.975 [2024-12-13 11:11:51.536244] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:30.975 [2024-12-13 11:11:51.536391] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:30.975 [2024-12-13 11:11:51.536399] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.975 [2024-12-13 11:11:51.536405] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
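The target has now been restarted with core mask 0x1 for the lvs_grow suite. The trace that follows recreates the RDMA transport and then runs lvs_grow_clean, whose essential flow condenses to the sketch below (sizes, cluster counts, and flags are taken from the trace; the lvs and lvol variables are placeholders for the UUIDs rpc.py returns):

  rpc=scripts/rpc.py
  aio=test/nvmf/target/aio_bdev
  # lvol store on a 200 MiB file-backed AIO bdev, 4 MiB clusters
  rm -f "$aio"; truncate -s 200M "$aio"
  $rpc bdev_aio_create "$aio" aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
            --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)                           # 150 MiB volume
  # grow the backing file and rescan; the store itself does not grow yet
  truncate -s 400M "$aio"
  $rpc bdev_aio_rescan aio_bdev
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 49
  # export the lvol over NVMe-oF/RDMA and let bdevperf write to it
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
  # grow the store while I/O is in flight, then confirm the new capacity
  $rpc bdev_lvol_grow_lvstore -u "$lvs"
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99

The point of the test is the jump from 49 to 99 total_data_clusters: enlarging the backing file and rescanning the AIO bdev changes nothing until bdev_lvol_grow_lvstore claims the new space, and it must do so while bdevperf keeps issuing random writes over RDMA.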
00:17:30.975 [2024-12-13 11:11:51.536426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.912 11:11:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:31.912 11:11:52 -- common/autotest_common.sh@862 -- # return 0 00:17:31.912 11:11:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:31.912 11:11:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:31.912 11:11:52 -- common/autotest_common.sh@10 -- # set +x 00:17:31.912 11:11:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:31.912 11:11:52 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:31.912 [2024-12-13 11:11:52.396042] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa987e0/0xa9ccd0) succeed. 00:17:31.912 [2024-12-13 11:11:52.403583] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa99ce0/0xade370) succeed. 00:17:31.912 11:11:52 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:17:31.912 11:11:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:31.912 11:11:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:31.912 11:11:52 -- common/autotest_common.sh@10 -- # set +x 00:17:31.912 ************************************ 00:17:31.912 START TEST lvs_grow_clean 00:17:31.912 ************************************ 00:17:31.912 11:11:52 -- common/autotest_common.sh@1114 -- # lvs_grow 00:17:31.912 11:11:52 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:31.912 11:11:52 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:31.912 11:11:52 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:31.912 11:11:52 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:31.912 11:11:52 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:31.912 11:11:52 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:31.912 11:11:52 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:31.912 11:11:52 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:31.912 11:11:52 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:32.171 11:11:52 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:32.171 11:11:52 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:32.431 11:11:52 -- target/nvmf_lvs_grow.sh@28 -- # lvs=40ef848c-966b-437f-9cdb-93d3f6ddb1a9 00:17:32.431 11:11:52 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40ef848c-966b-437f-9cdb-93d3f6ddb1a9 00:17:32.431 11:11:52 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:32.431 11:11:52 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:32.431 11:11:52 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:32.431 11:11:52 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 
40ef848c-966b-437f-9cdb-93d3f6ddb1a9 lvol 150 00:17:32.689 11:11:53 -- target/nvmf_lvs_grow.sh@33 -- # lvol=74c63e18-ac94-44d8-b617-0bcec6d9ac0c 00:17:32.689 11:11:53 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:32.689 11:11:53 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:32.948 [2024-12-13 11:11:53.298407] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:32.948 [2024-12-13 11:11:53.298454] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:32.948 true 00:17:32.948 11:11:53 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40ef848c-966b-437f-9cdb-93d3f6ddb1a9 00:17:32.948 11:11:53 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:32.948 11:11:53 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:32.948 11:11:53 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:33.207 11:11:53 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 74c63e18-ac94-44d8-b617-0bcec6d9ac0c 00:17:33.465 11:11:53 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:17:33.465 [2024-12-13 11:11:53.928506] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:33.465 11:11:53 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:33.725 11:11:54 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1618192 00:17:33.725 11:11:54 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:33.725 11:11:54 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1618192 /var/tmp/bdevperf.sock 00:17:33.725 11:11:54 -- common/autotest_common.sh@829 -- # '[' -z 1618192 ']' 00:17:33.725 11:11:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:33.725 11:11:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:33.725 11:11:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:33.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:33.725 11:11:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:33.725 11:11:54 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:33.725 11:11:54 -- common/autotest_common.sh@10 -- # set +x 00:17:33.725 [2024-12-13 11:11:54.143592] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:33.725 [2024-12-13 11:11:54.143634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1618192 ] 00:17:33.725 EAL: No free 2048 kB hugepages reported on node 1 00:17:33.725 [2024-12-13 11:11:54.192736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.725 [2024-12-13 11:11:54.257237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.662 11:11:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:34.662 11:11:54 -- common/autotest_common.sh@862 -- # return 0 00:17:34.662 11:11:54 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:34.662 Nvme0n1 00:17:34.662 11:11:55 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:34.921 [ 00:17:34.921 { 00:17:34.921 "name": "Nvme0n1", 00:17:34.921 "aliases": [ 00:17:34.921 "74c63e18-ac94-44d8-b617-0bcec6d9ac0c" 00:17:34.921 ], 00:17:34.921 "product_name": "NVMe disk", 00:17:34.921 "block_size": 4096, 00:17:34.921 "num_blocks": 38912, 00:17:34.921 "uuid": "74c63e18-ac94-44d8-b617-0bcec6d9ac0c", 00:17:34.921 "assigned_rate_limits": { 00:17:34.921 "rw_ios_per_sec": 0, 00:17:34.921 "rw_mbytes_per_sec": 0, 00:17:34.921 "r_mbytes_per_sec": 0, 00:17:34.921 "w_mbytes_per_sec": 0 00:17:34.921 }, 00:17:34.921 "claimed": false, 00:17:34.921 "zoned": false, 00:17:34.921 "supported_io_types": { 00:17:34.921 "read": true, 00:17:34.921 "write": true, 00:17:34.921 "unmap": true, 00:17:34.921 "write_zeroes": true, 00:17:34.921 "flush": true, 00:17:34.921 "reset": true, 00:17:34.921 "compare": true, 00:17:34.921 "compare_and_write": true, 00:17:34.921 "abort": true, 00:17:34.921 "nvme_admin": true, 00:17:34.921 "nvme_io": true 00:17:34.921 }, 00:17:34.921 "memory_domains": [ 00:17:34.921 { 00:17:34.921 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:17:34.921 "dma_device_type": 0 00:17:34.921 } 00:17:34.921 ], 00:17:34.921 "driver_specific": { 00:17:34.921 "nvme": [ 00:17:34.921 { 00:17:34.921 "trid": { 00:17:34.921 "trtype": "RDMA", 00:17:34.921 "adrfam": "IPv4", 00:17:34.921 "traddr": "192.168.100.8", 00:17:34.921 "trsvcid": "4420", 00:17:34.921 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:34.921 }, 00:17:34.921 "ctrlr_data": { 00:17:34.921 "cntlid": 1, 00:17:34.921 "vendor_id": "0x8086", 00:17:34.921 "model_number": "SPDK bdev Controller", 00:17:34.921 "serial_number": "SPDK0", 00:17:34.921 "firmware_revision": "24.01.1", 00:17:34.921 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:34.921 "oacs": { 00:17:34.921 "security": 0, 00:17:34.921 "format": 0, 00:17:34.921 "firmware": 0, 00:17:34.921 "ns_manage": 0 00:17:34.921 }, 00:17:34.921 "multi_ctrlr": true, 00:17:34.921 "ana_reporting": false 00:17:34.921 }, 00:17:34.921 "vs": { 00:17:34.921 "nvme_version": "1.3" 00:17:34.921 }, 00:17:34.921 "ns_data": { 00:17:34.921 "id": 1, 00:17:34.921 "can_share": true 00:17:34.921 } 00:17:34.921 } 00:17:34.921 ], 00:17:34.921 "mp_policy": "active_passive" 00:17:34.921 } 00:17:34.921 } 00:17:34.921 ] 00:17:34.921 11:11:55 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1618462 00:17:34.921 11:11:55 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:34.921 11:11:55 -- 
target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:34.921 Running I/O for 10 seconds... 00:17:35.856 Latency(us) 00:17:35.856 [2024-12-13T10:11:56.425Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.856 [2024-12-13T10:11:56.425Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:35.856 Nvme0n1 : 1.00 39392.00 153.88 0.00 0.00 0.00 0.00 0.00 00:17:35.856 [2024-12-13T10:11:56.425Z] =================================================================================================================== 00:17:35.856 [2024-12-13T10:11:56.425Z] Total : 39392.00 153.88 0.00 0.00 0.00 0.00 0.00 00:17:35.856 00:17:36.793 11:11:57 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 40ef848c-966b-437f-9cdb-93d3f6ddb1a9 00:17:37.051 [2024-12-13T10:11:57.620Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:37.051 Nvme0n1 : 2.00 39664.50 154.94 0.00 0.00 0.00 0.00 0.00 00:17:37.051 [2024-12-13T10:11:57.620Z] =================================================================================================================== 00:17:37.051 [2024-12-13T10:11:57.620Z] Total : 39664.50 154.94 0.00 0.00 0.00 0.00 0.00 00:17:37.051 00:17:37.051 true 00:17:37.051 11:11:57 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:37.052 11:11:57 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40ef848c-966b-437f-9cdb-93d3f6ddb1a9 00:17:37.310 11:11:57 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:37.310 11:11:57 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:37.310 11:11:57 -- target/nvmf_lvs_grow.sh@65 -- # wait 1618462 00:17:37.878 [2024-12-13T10:11:58.447Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:37.878 Nvme0n1 : 3.00 39754.67 155.29 0.00 0.00 0.00 0.00 0.00 00:17:37.878 [2024-12-13T10:11:58.447Z] =================================================================================================================== 00:17:37.878 [2024-12-13T10:11:58.447Z] Total : 39754.67 155.29 0.00 0.00 0.00 0.00 0.00 00:17:37.878 00:17:39.256 [2024-12-13T10:11:59.825Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:39.256 Nvme0n1 : 4.00 39840.75 155.63 0.00 0.00 0.00 0.00 0.00 00:17:39.256 [2024-12-13T10:11:59.825Z] =================================================================================================================== 00:17:39.256 [2024-12-13T10:11:59.825Z] Total : 39840.75 155.63 0.00 0.00 0.00 0.00 0.00 00:17:39.256 00:17:40.191 [2024-12-13T10:12:00.760Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:40.191 Nvme0n1 : 5.00 39794.00 155.45 0.00 0.00 0.00 0.00 0.00 00:17:40.191 [2024-12-13T10:12:00.760Z] =================================================================================================================== 00:17:40.191 [2024-12-13T10:12:00.760Z] Total : 39794.00 155.45 0.00 0.00 0.00 0.00 0.00 00:17:40.191 00:17:41.126 [2024-12-13T10:12:01.695Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:41.126 Nvme0n1 : 6.00 39760.67 155.32 0.00 0.00 0.00 0.00 0.00 00:17:41.126 [2024-12-13T10:12:01.695Z] 
=================================================================================================================== 00:17:41.126 [2024-12-13T10:12:01.695Z] Total : 39760.67 155.32 0.00 0.00 0.00 0.00 0.00 00:17:41.126 00:17:42.063 [2024-12-13T10:12:02.632Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:42.063 Nvme0n1 : 7.00 39812.29 155.52 0.00 0.00 0.00 0.00 0.00 00:17:42.063 [2024-12-13T10:12:02.632Z] =================================================================================================================== 00:17:42.063 [2024-12-13T10:12:02.632Z] Total : 39812.29 155.52 0.00 0.00 0.00 0.00 0.00 00:17:42.063 00:17:42.999 [2024-12-13T10:12:03.569Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:43.000 Nvme0n1 : 8.00 39859.62 155.70 0.00 0.00 0.00 0.00 0.00 00:17:43.000 [2024-12-13T10:12:03.569Z] =================================================================================================================== 00:17:43.000 [2024-12-13T10:12:03.569Z] Total : 39859.62 155.70 0.00 0.00 0.00 0.00 0.00 00:17:43.000 00:17:43.936 [2024-12-13T10:12:04.505Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:43.936 Nvme0n1 : 9.00 39871.78 155.75 0.00 0.00 0.00 0.00 0.00 00:17:43.936 [2024-12-13T10:12:04.505Z] =================================================================================================================== 00:17:43.936 [2024-12-13T10:12:04.505Z] Total : 39871.78 155.75 0.00 0.00 0.00 0.00 0.00 00:17:43.936 00:17:44.871 [2024-12-13T10:12:05.440Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:44.871 Nvme0n1 : 10.00 39849.20 155.66 0.00 0.00 0.00 0.00 0.00 00:17:44.871 [2024-12-13T10:12:05.440Z] =================================================================================================================== 00:17:44.871 [2024-12-13T10:12:05.440Z] Total : 39849.20 155.66 0.00 0.00 0.00 0.00 0.00 00:17:44.871 00:17:44.871 00:17:44.871 Latency(us) 00:17:44.871 [2024-12-13T10:12:05.440Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.871 [2024-12-13T10:12:05.440Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:44.871 Nvme0n1 : 10.00 39847.51 155.65 0.00 0.00 3209.76 2415.12 7718.68 00:17:44.871 [2024-12-13T10:12:05.440Z] =================================================================================================================== 00:17:44.871 [2024-12-13T10:12:05.441Z] Total : 39847.51 155.65 0.00 0.00 3209.76 2415.12 7718.68 00:17:44.872 0 00:17:44.872 11:12:05 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1618192 00:17:44.872 11:12:05 -- common/autotest_common.sh@936 -- # '[' -z 1618192 ']' 00:17:44.872 11:12:05 -- common/autotest_common.sh@940 -- # kill -0 1618192 00:17:44.872 11:12:05 -- common/autotest_common.sh@941 -- # uname 00:17:44.872 11:12:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:45.130 11:12:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1618192 00:17:45.130 11:12:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:45.130 11:12:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:45.130 11:12:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1618192' 00:17:45.130 killing process with pid 1618192 00:17:45.130 11:12:05 -- common/autotest_common.sh@955 -- # kill 1618192 00:17:45.130 Received shutdown signal, test time was about 10.000000 seconds 
00:17:45.130 00:17:45.130 Latency(us) 00:17:45.130 [2024-12-13T10:12:05.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.130 [2024-12-13T10:12:05.699Z] =================================================================================================================== 00:17:45.130 [2024-12-13T10:12:05.699Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:45.130 11:12:05 -- common/autotest_common.sh@960 -- # wait 1618192 00:17:45.130 11:12:05 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:45.389 11:12:05 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40ef848c-966b-437f-9cdb-93d3f6ddb1a9 00:17:45.389 11:12:05 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:17:45.648 11:12:06 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:17:45.648 11:12:06 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:17:45.648 11:12:06 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:45.648 [2024-12-13 11:12:06.196915] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:45.907 11:12:06 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40ef848c-966b-437f-9cdb-93d3f6ddb1a9 00:17:45.907 11:12:06 -- common/autotest_common.sh@650 -- # local es=0 00:17:45.908 11:12:06 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40ef848c-966b-437f-9cdb-93d3f6ddb1a9 00:17:45.908 11:12:06 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:45.908 11:12:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:45.908 11:12:06 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:45.908 11:12:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:45.908 11:12:06 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:45.908 11:12:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:45.908 11:12:06 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:45.908 11:12:06 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:17:45.908 11:12:06 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40ef848c-966b-437f-9cdb-93d3f6ddb1a9 00:17:45.908 request: 00:17:45.908 { 00:17:45.908 "uuid": "40ef848c-966b-437f-9cdb-93d3f6ddb1a9", 00:17:45.908 "method": "bdev_lvol_get_lvstores", 00:17:45.908 "req_id": 1 00:17:45.908 } 00:17:45.908 Got JSON-RPC error response 00:17:45.908 response: 00:17:45.908 { 00:17:45.908 "code": -19, 00:17:45.908 "message": "No such device" 00:17:45.908 } 00:17:45.908 11:12:06 -- common/autotest_common.sh@653 -- # es=1 00:17:45.908 11:12:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:45.908 11:12:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:45.908 11:12:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:45.908 11:12:06 -- 
target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:46.166 aio_bdev 00:17:46.166 11:12:06 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 74c63e18-ac94-44d8-b617-0bcec6d9ac0c 00:17:46.166 11:12:06 -- common/autotest_common.sh@897 -- # local bdev_name=74c63e18-ac94-44d8-b617-0bcec6d9ac0c 00:17:46.166 11:12:06 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:46.166 11:12:06 -- common/autotest_common.sh@899 -- # local i 00:17:46.166 11:12:06 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:46.166 11:12:06 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:46.166 11:12:06 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:46.166 11:12:06 -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 74c63e18-ac94-44d8-b617-0bcec6d9ac0c -t 2000 00:17:46.426 [ 00:17:46.426 { 00:17:46.426 "name": "74c63e18-ac94-44d8-b617-0bcec6d9ac0c", 00:17:46.426 "aliases": [ 00:17:46.426 "lvs/lvol" 00:17:46.426 ], 00:17:46.426 "product_name": "Logical Volume", 00:17:46.426 "block_size": 4096, 00:17:46.426 "num_blocks": 38912, 00:17:46.426 "uuid": "74c63e18-ac94-44d8-b617-0bcec6d9ac0c", 00:17:46.426 "assigned_rate_limits": { 00:17:46.426 "rw_ios_per_sec": 0, 00:17:46.426 "rw_mbytes_per_sec": 0, 00:17:46.426 "r_mbytes_per_sec": 0, 00:17:46.426 "w_mbytes_per_sec": 0 00:17:46.426 }, 00:17:46.426 "claimed": false, 00:17:46.426 "zoned": false, 00:17:46.426 "supported_io_types": { 00:17:46.426 "read": true, 00:17:46.426 "write": true, 00:17:46.426 "unmap": true, 00:17:46.426 "write_zeroes": true, 00:17:46.426 "flush": false, 00:17:46.426 "reset": true, 00:17:46.426 "compare": false, 00:17:46.426 "compare_and_write": false, 00:17:46.426 "abort": false, 00:17:46.426 "nvme_admin": false, 00:17:46.426 "nvme_io": false 00:17:46.426 }, 00:17:46.426 "driver_specific": { 00:17:46.426 "lvol": { 00:17:46.426 "lvol_store_uuid": "40ef848c-966b-437f-9cdb-93d3f6ddb1a9", 00:17:46.426 "base_bdev": "aio_bdev", 00:17:46.426 "thin_provision": false, 00:17:46.426 "snapshot": false, 00:17:46.426 "clone": false, 00:17:46.426 "esnap_clone": false 00:17:46.426 } 00:17:46.426 } 00:17:46.426 } 00:17:46.426 ] 00:17:46.426 11:12:06 -- common/autotest_common.sh@905 -- # return 0 00:17:46.426 11:12:06 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40ef848c-966b-437f-9cdb-93d3f6ddb1a9 00:17:46.426 11:12:06 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:17:46.685 11:12:07 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:17:46.685 11:12:07 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40ef848c-966b-437f-9cdb-93d3f6ddb1a9 00:17:46.685 11:12:07 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:17:46.685 11:12:07 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:17:46.685 11:12:07 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 74c63e18-ac94-44d8-b617-0bcec6d9ac0c 00:17:46.944 11:12:07 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 40ef848c-966b-437f-9cdb-93d3f6ddb1a9 00:17:46.944 11:12:07 -- 
target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:47.202 11:12:07 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:47.202 00:17:47.202 real 0m15.240s 00:17:47.202 user 0m15.335s 00:17:47.202 sys 0m0.877s 00:17:47.202 11:12:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:47.202 11:12:07 -- common/autotest_common.sh@10 -- # set +x 00:17:47.202 ************************************ 00:17:47.202 END TEST lvs_grow_clean 00:17:47.202 ************************************ 00:17:47.202 11:12:07 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:47.202 11:12:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:47.202 11:12:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:47.202 11:12:07 -- common/autotest_common.sh@10 -- # set +x 00:17:47.202 ************************************ 00:17:47.202 START TEST lvs_grow_dirty 00:17:47.202 ************************************ 00:17:47.202 11:12:07 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:17:47.202 11:12:07 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:47.202 11:12:07 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:47.202 11:12:07 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:47.202 11:12:07 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:47.202 11:12:07 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:47.202 11:12:07 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:47.202 11:12:07 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:47.202 11:12:07 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:47.202 11:12:07 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:47.461 11:12:07 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:47.461 11:12:07 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:47.719 11:12:08 -- target/nvmf_lvs_grow.sh@28 -- # lvs=b29d9f68-6f02-49ca-b5fe-b513c987b1c6 00:17:47.719 11:12:08 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b29d9f68-6f02-49ca-b5fe-b513c987b1c6 00:17:47.719 11:12:08 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:47.719 11:12:08 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:47.719 11:12:08 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:47.719 11:12:08 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b29d9f68-6f02-49ca-b5fe-b513c987b1c6 lvol 150 00:17:47.978 11:12:08 -- target/nvmf_lvs_grow.sh@33 -- # lvol=219cfdf3-fd47-46b5-b83b-033e33af625a 00:17:47.978 11:12:08 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:47.978 11:12:08 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan 
aio_bdev 00:17:48.238 [2024-12-13 11:12:08.580861] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:48.238 [2024-12-13 11:12:08.580911] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:48.238 true 00:17:48.238 11:12:08 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b29d9f68-6f02-49ca-b5fe-b513c987b1c6 00:17:48.238 11:12:08 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:48.238 11:12:08 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:48.238 11:12:08 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:48.496 11:12:08 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 219cfdf3-fd47-46b5-b83b-033e33af625a 00:17:48.755 11:12:09 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:17:48.755 11:12:09 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:49.014 11:12:09 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:49.014 11:12:09 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1621551 00:17:49.014 11:12:09 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:49.014 11:12:09 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1621551 /var/tmp/bdevperf.sock 00:17:49.014 11:12:09 -- common/autotest_common.sh@829 -- # '[' -z 1621551 ']' 00:17:49.014 11:12:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:49.014 11:12:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:49.015 11:12:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:49.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:49.015 11:12:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:49.015 11:12:09 -- common/autotest_common.sh@10 -- # set +x 00:17:49.015 [2024-12-13 11:12:09.419480] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:49.015 [2024-12-13 11:12:09.419526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1621551 ] 00:17:49.015 EAL: No free 2048 kB hugepages reported on node 1 00:17:49.015 [2024-12-13 11:12:09.469825] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.015 [2024-12-13 11:12:09.537464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.951 11:12:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:49.951 11:12:10 -- common/autotest_common.sh@862 -- # return 0 00:17:49.951 11:12:10 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:49.951 Nvme0n1 00:17:49.951 11:12:10 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:50.210 [ 00:17:50.210 { 00:17:50.210 "name": "Nvme0n1", 00:17:50.210 "aliases": [ 00:17:50.210 "219cfdf3-fd47-46b5-b83b-033e33af625a" 00:17:50.210 ], 00:17:50.210 "product_name": "NVMe disk", 00:17:50.210 "block_size": 4096, 00:17:50.210 "num_blocks": 38912, 00:17:50.210 "uuid": "219cfdf3-fd47-46b5-b83b-033e33af625a", 00:17:50.210 "assigned_rate_limits": { 00:17:50.210 "rw_ios_per_sec": 0, 00:17:50.210 "rw_mbytes_per_sec": 0, 00:17:50.210 "r_mbytes_per_sec": 0, 00:17:50.210 "w_mbytes_per_sec": 0 00:17:50.210 }, 00:17:50.210 "claimed": false, 00:17:50.210 "zoned": false, 00:17:50.210 "supported_io_types": { 00:17:50.210 "read": true, 00:17:50.210 "write": true, 00:17:50.210 "unmap": true, 00:17:50.210 "write_zeroes": true, 00:17:50.210 "flush": true, 00:17:50.210 "reset": true, 00:17:50.210 "compare": true, 00:17:50.210 "compare_and_write": true, 00:17:50.210 "abort": true, 00:17:50.210 "nvme_admin": true, 00:17:50.210 "nvme_io": true 00:17:50.210 }, 00:17:50.210 "memory_domains": [ 00:17:50.210 { 00:17:50.210 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:17:50.210 "dma_device_type": 0 00:17:50.210 } 00:17:50.210 ], 00:17:50.210 "driver_specific": { 00:17:50.210 "nvme": [ 00:17:50.210 { 00:17:50.210 "trid": { 00:17:50.210 "trtype": "RDMA", 00:17:50.210 "adrfam": "IPv4", 00:17:50.210 "traddr": "192.168.100.8", 00:17:50.210 "trsvcid": "4420", 00:17:50.210 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:50.210 }, 00:17:50.210 "ctrlr_data": { 00:17:50.210 "cntlid": 1, 00:17:50.210 "vendor_id": "0x8086", 00:17:50.210 "model_number": "SPDK bdev Controller", 00:17:50.210 "serial_number": "SPDK0", 00:17:50.210 "firmware_revision": "24.01.1", 00:17:50.210 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:50.210 "oacs": { 00:17:50.210 "security": 0, 00:17:50.210 "format": 0, 00:17:50.210 "firmware": 0, 00:17:50.210 "ns_manage": 0 00:17:50.210 }, 00:17:50.210 "multi_ctrlr": true, 00:17:50.210 "ana_reporting": false 00:17:50.210 }, 00:17:50.210 "vs": { 00:17:50.210 "nvme_version": "1.3" 00:17:50.210 }, 00:17:50.210 "ns_data": { 00:17:50.210 "id": 1, 00:17:50.210 "can_share": true 00:17:50.210 } 00:17:50.210 } 00:17:50.210 ], 00:17:50.210 "mp_policy": "active_passive" 00:17:50.210 } 00:17:50.210 } 00:17:50.210 ] 00:17:50.210 11:12:10 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1621823 00:17:50.210 11:12:10 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:50.210 11:12:10 -- 
target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:50.210 Running I/O for 10 seconds... 00:17:51.588 Latency(us) 00:17:51.588 [2024-12-13T10:12:12.157Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.588 [2024-12-13T10:12:12.157Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:51.588 Nvme0n1 : 1.00 39136.00 152.88 0.00 0.00 0.00 0.00 0.00 00:17:51.588 [2024-12-13T10:12:12.157Z] =================================================================================================================== 00:17:51.588 [2024-12-13T10:12:12.157Z] Total : 39136.00 152.88 0.00 0.00 0.00 0.00 0.00 00:17:51.588 00:17:52.282 11:12:12 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b29d9f68-6f02-49ca-b5fe-b513c987b1c6 00:17:52.282 [2024-12-13T10:12:12.851Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:52.282 Nvme0n1 : 2.00 39440.00 154.06 0.00 0.00 0.00 0.00 0.00 00:17:52.282 [2024-12-13T10:12:12.851Z] =================================================================================================================== 00:17:52.282 [2024-12-13T10:12:12.851Z] Total : 39440.00 154.06 0.00 0.00 0.00 0.00 0.00 00:17:52.282 00:17:52.282 true 00:17:52.282 11:12:12 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b29d9f68-6f02-49ca-b5fe-b513c987b1c6 00:17:52.282 11:12:12 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:52.549 11:12:12 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:52.549 11:12:12 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:52.549 11:12:12 -- target/nvmf_lvs_grow.sh@65 -- # wait 1621823 00:17:53.484 [2024-12-13T10:12:14.053Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:53.484 Nvme0n1 : 3.00 39594.67 154.67 0.00 0.00 0.00 0.00 0.00 00:17:53.484 [2024-12-13T10:12:14.053Z] =================================================================================================================== 00:17:53.484 [2024-12-13T10:12:14.053Z] Total : 39594.67 154.67 0.00 0.00 0.00 0.00 0.00 00:17:53.484 00:17:54.419 [2024-12-13T10:12:14.988Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:54.419 Nvme0n1 : 4.00 39720.00 155.16 0.00 0.00 0.00 0.00 0.00 00:17:54.419 [2024-12-13T10:12:14.989Z] =================================================================================================================== 00:17:54.420 [2024-12-13T10:12:14.989Z] Total : 39720.00 155.16 0.00 0.00 0.00 0.00 0.00 00:17:54.420 00:17:55.361 [2024-12-13T10:12:15.930Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:55.361 Nvme0n1 : 5.00 39808.40 155.50 0.00 0.00 0.00 0.00 0.00 00:17:55.361 [2024-12-13T10:12:15.930Z] =================================================================================================================== 00:17:55.361 [2024-12-13T10:12:15.930Z] Total : 39808.40 155.50 0.00 0.00 0.00 0.00 0.00 00:17:55.361 00:17:56.298 [2024-12-13T10:12:16.867Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:56.298 Nvme0n1 : 6.00 39883.00 155.79 0.00 0.00 0.00 0.00 0.00 00:17:56.298 [2024-12-13T10:12:16.867Z] 
=================================================================================================================== 00:17:56.298 [2024-12-13T10:12:16.867Z] Total : 39883.00 155.79 0.00 0.00 0.00 0.00 0.00 00:17:56.298 00:17:57.235 [2024-12-13T10:12:17.804Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:57.235 Nvme0n1 : 7.00 39922.57 155.95 0.00 0.00 0.00 0.00 0.00 00:17:57.235 [2024-12-13T10:12:17.804Z] =================================================================================================================== 00:17:57.235 [2024-12-13T10:12:17.804Z] Total : 39922.57 155.95 0.00 0.00 0.00 0.00 0.00 00:17:57.235 00:17:58.614 [2024-12-13T10:12:19.183Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:58.614 Nvme0n1 : 8.00 39952.12 156.06 0.00 0.00 0.00 0.00 0.00 00:17:58.614 [2024-12-13T10:12:19.183Z] =================================================================================================================== 00:17:58.614 [2024-12-13T10:12:19.183Z] Total : 39952.12 156.06 0.00 0.00 0.00 0.00 0.00 00:17:58.614 00:17:59.182 [2024-12-13T10:12:19.751Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:59.182 Nvme0n1 : 9.00 39986.11 156.20 0.00 0.00 0.00 0.00 0.00 00:17:59.182 [2024-12-13T10:12:19.751Z] =================================================================================================================== 00:17:59.182 [2024-12-13T10:12:19.751Z] Total : 39986.11 156.20 0.00 0.00 0.00 0.00 0.00 00:17:59.182 00:18:00.560 [2024-12-13T10:12:21.129Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:00.560 Nvme0n1 : 10.00 39919.50 155.94 0.00 0.00 0.00 0.00 0.00 00:18:00.560 [2024-12-13T10:12:21.129Z] =================================================================================================================== 00:18:00.560 [2024-12-13T10:12:21.129Z] Total : 39919.50 155.94 0.00 0.00 0.00 0.00 0.00 00:18:00.560 00:18:00.560 00:18:00.560 Latency(us) 00:18:00.560 [2024-12-13T10:12:21.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.560 [2024-12-13T10:12:21.129Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:00.560 Nvme0n1 : 10.00 39919.74 155.94 0.00 0.00 3203.85 2099.58 11650.84 00:18:00.560 [2024-12-13T10:12:21.129Z] =================================================================================================================== 00:18:00.560 [2024-12-13T10:12:21.129Z] Total : 39919.74 155.94 0.00 0.00 3203.85 2099.58 11650.84 00:18:00.560 0 00:18:00.560 11:12:20 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1621551 00:18:00.560 11:12:20 -- common/autotest_common.sh@936 -- # '[' -z 1621551 ']' 00:18:00.560 11:12:20 -- common/autotest_common.sh@940 -- # kill -0 1621551 00:18:00.560 11:12:20 -- common/autotest_common.sh@941 -- # uname 00:18:00.560 11:12:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:00.560 11:12:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1621551 00:18:00.560 11:12:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:00.560 11:12:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:00.560 11:12:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1621551' 00:18:00.560 killing process with pid 1621551 00:18:00.560 11:12:20 -- common/autotest_common.sh@955 -- # kill 1621551 00:18:00.560 Received shutdown signal, test time was about 10.000000 seconds 
00:18:00.560 00:18:00.560 Latency(us) 00:18:00.560 [2024-12-13T10:12:21.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.560 [2024-12-13T10:12:21.129Z] =================================================================================================================== 00:18:00.560 [2024-12-13T10:12:21.129Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:00.560 11:12:20 -- common/autotest_common.sh@960 -- # wait 1621551 00:18:00.560 11:12:21 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:00.818 11:12:21 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b29d9f68-6f02-49ca-b5fe-b513c987b1c6 00:18:00.818 11:12:21 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:01.077 11:12:21 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:01.077 11:12:21 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:18:01.077 11:12:21 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 1617621 00:18:01.077 11:12:21 -- target/nvmf_lvs_grow.sh@74 -- # wait 1617621 00:18:01.077 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 1617621 Killed "${NVMF_APP[@]}" "$@" 00:18:01.077 11:12:21 -- target/nvmf_lvs_grow.sh@74 -- # true 00:18:01.077 11:12:21 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:18:01.077 11:12:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:01.077 11:12:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:01.077 11:12:21 -- common/autotest_common.sh@10 -- # set +x 00:18:01.077 11:12:21 -- nvmf/common.sh@469 -- # nvmfpid=1623801 00:18:01.077 11:12:21 -- nvmf/common.sh@470 -- # waitforlisten 1623801 00:18:01.077 11:12:21 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:01.077 11:12:21 -- common/autotest_common.sh@829 -- # '[' -z 1623801 ']' 00:18:01.077 11:12:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.077 11:12:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:01.078 11:12:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.078 11:12:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:01.078 11:12:21 -- common/autotest_common.sh@10 -- # set +x 00:18:01.078 [2024-12-13 11:12:21.479843] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:01.078 [2024-12-13 11:12:21.479892] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.078 EAL: No free 2048 kB hugepages reported on node 1 00:18:01.078 [2024-12-13 11:12:21.532031] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.078 [2024-12-13 11:12:21.602054] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:01.078 [2024-12-13 11:12:21.602152] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
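The dirty variant differs only in how it gets here: the first nvmf_tgt (pid 1617621) was killed with SIGKILL while the lvstore was still open, so when the replacement target re-registers the AIO bdev below, the blobstore has to be replayed ("Performing recovery on blobstore"). In outline (the $nvmfpid variable is illustrative; the RPCs match the ones issued below):

    kill -9 $nvmfpid                                   # leave the lvstore dirty
    # ... start a fresh nvmf_tgt ...
    rpc.py bdev_aio_create ./aio_bdev aio_bdev 4096    # loading the bdev triggers blobstore recovery
    rpc.py bdev_wait_for_examine
    rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].free_clusters'   # 61 free of 99 once recovered
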
00:18:01.078 [2024-12-13 11:12:21.602158] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:01.078 [2024-12-13 11:12:21.602165] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:01.078 [2024-12-13 11:12:21.602184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.015 11:12:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:02.015 11:12:22 -- common/autotest_common.sh@862 -- # return 0 00:18:02.015 11:12:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:02.015 11:12:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:02.015 11:12:22 -- common/autotest_common.sh@10 -- # set +x 00:18:02.015 11:12:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.015 11:12:22 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:02.015 [2024-12-13 11:12:22.441394] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:02.015 [2024-12-13 11:12:22.441473] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:02.015 [2024-12-13 11:12:22.441496] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:02.015 11:12:22 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:18:02.015 11:12:22 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 219cfdf3-fd47-46b5-b83b-033e33af625a 00:18:02.015 11:12:22 -- common/autotest_common.sh@897 -- # local bdev_name=219cfdf3-fd47-46b5-b83b-033e33af625a 00:18:02.015 11:12:22 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:02.015 11:12:22 -- common/autotest_common.sh@899 -- # local i 00:18:02.015 11:12:22 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:02.015 11:12:22 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:02.015 11:12:22 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:02.273 11:12:22 -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 219cfdf3-fd47-46b5-b83b-033e33af625a -t 2000 00:18:02.273 [ 00:18:02.273 { 00:18:02.273 "name": "219cfdf3-fd47-46b5-b83b-033e33af625a", 00:18:02.273 "aliases": [ 00:18:02.273 "lvs/lvol" 00:18:02.273 ], 00:18:02.273 "product_name": "Logical Volume", 00:18:02.273 "block_size": 4096, 00:18:02.273 "num_blocks": 38912, 00:18:02.273 "uuid": "219cfdf3-fd47-46b5-b83b-033e33af625a", 00:18:02.273 "assigned_rate_limits": { 00:18:02.273 "rw_ios_per_sec": 0, 00:18:02.273 "rw_mbytes_per_sec": 0, 00:18:02.273 "r_mbytes_per_sec": 0, 00:18:02.273 "w_mbytes_per_sec": 0 00:18:02.273 }, 00:18:02.273 "claimed": false, 00:18:02.273 "zoned": false, 00:18:02.273 "supported_io_types": { 00:18:02.273 "read": true, 00:18:02.273 "write": true, 00:18:02.273 "unmap": true, 00:18:02.273 "write_zeroes": true, 00:18:02.273 "flush": false, 00:18:02.273 "reset": true, 00:18:02.273 "compare": false, 00:18:02.273 "compare_and_write": false, 00:18:02.273 "abort": false, 00:18:02.273 "nvme_admin": false, 00:18:02.273 "nvme_io": false 00:18:02.273 }, 00:18:02.273 "driver_specific": { 00:18:02.273 "lvol": { 00:18:02.273 "lvol_store_uuid": "b29d9f68-6f02-49ca-b5fe-b513c987b1c6", 00:18:02.273 "base_bdev": "aio_bdev", 00:18:02.273 "thin_provision": false, 
00:18:02.273 "snapshot": false, 00:18:02.273 "clone": false, 00:18:02.273 "esnap_clone": false 00:18:02.273 } 00:18:02.273 } 00:18:02.273 } 00:18:02.273 ] 00:18:02.273 11:12:22 -- common/autotest_common.sh@905 -- # return 0 00:18:02.273 11:12:22 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b29d9f68-6f02-49ca-b5fe-b513c987b1c6 00:18:02.273 11:12:22 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:18:02.532 11:12:22 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:18:02.532 11:12:22 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b29d9f68-6f02-49ca-b5fe-b513c987b1c6 00:18:02.532 11:12:22 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:18:02.791 11:12:23 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:18:02.791 11:12:23 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:02.791 [2024-12-13 11:12:23.277873] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:02.791 11:12:23 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b29d9f68-6f02-49ca-b5fe-b513c987b1c6 00:18:02.791 11:12:23 -- common/autotest_common.sh@650 -- # local es=0 00:18:02.791 11:12:23 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b29d9f68-6f02-49ca-b5fe-b513c987b1c6 00:18:02.791 11:12:23 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:02.791 11:12:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:02.791 11:12:23 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:02.791 11:12:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:02.791 11:12:23 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:02.791 11:12:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:02.791 11:12:23 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:02.791 11:12:23 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:18:02.791 11:12:23 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b29d9f68-6f02-49ca-b5fe-b513c987b1c6 00:18:03.050 request: 00:18:03.050 { 00:18:03.050 "uuid": "b29d9f68-6f02-49ca-b5fe-b513c987b1c6", 00:18:03.050 "method": "bdev_lvol_get_lvstores", 00:18:03.051 "req_id": 1 00:18:03.051 } 00:18:03.051 Got JSON-RPC error response 00:18:03.051 response: 00:18:03.051 { 00:18:03.051 "code": -19, 00:18:03.051 "message": "No such device" 00:18:03.051 } 00:18:03.051 11:12:23 -- common/autotest_common.sh@653 -- # es=1 00:18:03.051 11:12:23 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:03.051 11:12:23 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:03.051 11:12:23 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:03.051 11:12:23 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:03.309 aio_bdev 00:18:03.309 11:12:23 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 219cfdf3-fd47-46b5-b83b-033e33af625a 00:18:03.309 11:12:23 -- common/autotest_common.sh@897 -- # local bdev_name=219cfdf3-fd47-46b5-b83b-033e33af625a 00:18:03.309 11:12:23 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:03.309 11:12:23 -- common/autotest_common.sh@899 -- # local i 00:18:03.309 11:12:23 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:03.309 11:12:23 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:03.309 11:12:23 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:03.309 11:12:23 -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 219cfdf3-fd47-46b5-b83b-033e33af625a -t 2000 00:18:03.568 [ 00:18:03.568 { 00:18:03.568 "name": "219cfdf3-fd47-46b5-b83b-033e33af625a", 00:18:03.568 "aliases": [ 00:18:03.568 "lvs/lvol" 00:18:03.568 ], 00:18:03.569 "product_name": "Logical Volume", 00:18:03.569 "block_size": 4096, 00:18:03.569 "num_blocks": 38912, 00:18:03.569 "uuid": "219cfdf3-fd47-46b5-b83b-033e33af625a", 00:18:03.569 "assigned_rate_limits": { 00:18:03.569 "rw_ios_per_sec": 0, 00:18:03.569 "rw_mbytes_per_sec": 0, 00:18:03.569 "r_mbytes_per_sec": 0, 00:18:03.569 "w_mbytes_per_sec": 0 00:18:03.569 }, 00:18:03.569 "claimed": false, 00:18:03.569 "zoned": false, 00:18:03.569 "supported_io_types": { 00:18:03.569 "read": true, 00:18:03.569 "write": true, 00:18:03.569 "unmap": true, 00:18:03.569 "write_zeroes": true, 00:18:03.569 "flush": false, 00:18:03.569 "reset": true, 00:18:03.569 "compare": false, 00:18:03.569 "compare_and_write": false, 00:18:03.569 "abort": false, 00:18:03.569 "nvme_admin": false, 00:18:03.569 "nvme_io": false 00:18:03.569 }, 00:18:03.569 "driver_specific": { 00:18:03.569 "lvol": { 00:18:03.569 "lvol_store_uuid": "b29d9f68-6f02-49ca-b5fe-b513c987b1c6", 00:18:03.569 "base_bdev": "aio_bdev", 00:18:03.569 "thin_provision": false, 00:18:03.569 "snapshot": false, 00:18:03.569 "clone": false, 00:18:03.569 "esnap_clone": false 00:18:03.569 } 00:18:03.569 } 00:18:03.569 } 00:18:03.569 ] 00:18:03.569 11:12:23 -- common/autotest_common.sh@905 -- # return 0 00:18:03.569 11:12:23 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b29d9f68-6f02-49ca-b5fe-b513c987b1c6 00:18:03.569 11:12:23 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:03.569 11:12:24 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:03.569 11:12:24 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b29d9f68-6f02-49ca-b5fe-b513c987b1c6 00:18:03.569 11:12:24 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:03.828 11:12:24 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:03.828 11:12:24 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 219cfdf3-fd47-46b5-b83b-033e33af625a 00:18:04.087 11:12:24 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b29d9f68-6f02-49ca-b5fe-b513c987b1c6 00:18:04.087 11:12:24 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete 
aio_bdev 00:18:04.345 11:12:24 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:04.345 00:18:04.345 real 0m17.080s 00:18:04.345 user 0m44.563s 00:18:04.345 sys 0m2.756s 00:18:04.345 11:12:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:04.345 11:12:24 -- common/autotest_common.sh@10 -- # set +x 00:18:04.345 ************************************ 00:18:04.345 END TEST lvs_grow_dirty 00:18:04.345 ************************************ 00:18:04.345 11:12:24 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:18:04.345 11:12:24 -- common/autotest_common.sh@806 -- # type=--id 00:18:04.345 11:12:24 -- common/autotest_common.sh@807 -- # id=0 00:18:04.345 11:12:24 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:04.345 11:12:24 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:04.345 11:12:24 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:04.345 11:12:24 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:04.345 11:12:24 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:04.345 11:12:24 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:04.345 nvmf_trace.0 00:18:04.345 11:12:24 -- common/autotest_common.sh@821 -- # return 0 00:18:04.345 11:12:24 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:04.345 11:12:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:04.345 11:12:24 -- nvmf/common.sh@116 -- # sync 00:18:04.345 11:12:24 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:18:04.345 11:12:24 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:18:04.345 11:12:24 -- nvmf/common.sh@119 -- # set +e 00:18:04.345 11:12:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:04.345 11:12:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:18:04.345 rmmod nvme_rdma 00:18:04.604 rmmod nvme_fabrics 00:18:04.604 11:12:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:04.604 11:12:24 -- nvmf/common.sh@123 -- # set -e 00:18:04.604 11:12:24 -- nvmf/common.sh@124 -- # return 0 00:18:04.604 11:12:24 -- nvmf/common.sh@477 -- # '[' -n 1623801 ']' 00:18:04.604 11:12:24 -- nvmf/common.sh@478 -- # killprocess 1623801 00:18:04.604 11:12:24 -- common/autotest_common.sh@936 -- # '[' -z 1623801 ']' 00:18:04.604 11:12:24 -- common/autotest_common.sh@940 -- # kill -0 1623801 00:18:04.604 11:12:24 -- common/autotest_common.sh@941 -- # uname 00:18:04.604 11:12:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:04.604 11:12:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1623801 00:18:04.604 11:12:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:04.604 11:12:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:04.604 11:12:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1623801' 00:18:04.604 killing process with pid 1623801 00:18:04.604 11:12:24 -- common/autotest_common.sh@955 -- # kill 1623801 00:18:04.604 11:12:24 -- common/autotest_common.sh@960 -- # wait 1623801 00:18:04.869 11:12:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:04.869 11:12:25 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:18:04.869 00:18:04.869 real 0m39.425s 00:18:04.869 user 1m5.531s 00:18:04.869 sys 0m8.157s 00:18:04.869 11:12:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:04.869 11:12:25 -- common/autotest_common.sh@10 -- 
# set +x 00:18:04.869 ************************************ 00:18:04.869 END TEST nvmf_lvs_grow 00:18:04.869 ************************************ 00:18:04.869 11:12:25 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:18:04.870 11:12:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:04.870 11:12:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:04.870 11:12:25 -- common/autotest_common.sh@10 -- # set +x 00:18:04.870 ************************************ 00:18:04.870 START TEST nvmf_bdev_io_wait 00:18:04.870 ************************************ 00:18:04.870 11:12:25 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:18:04.870 * Looking for test storage... 00:18:04.870 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:04.870 11:12:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:04.870 11:12:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:04.870 11:12:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:04.870 11:12:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:04.870 11:12:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:04.870 11:12:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:04.870 11:12:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:04.870 11:12:25 -- scripts/common.sh@335 -- # IFS=.-: 00:18:04.870 11:12:25 -- scripts/common.sh@335 -- # read -ra ver1 00:18:04.870 11:12:25 -- scripts/common.sh@336 -- # IFS=.-: 00:18:04.870 11:12:25 -- scripts/common.sh@336 -- # read -ra ver2 00:18:04.870 11:12:25 -- scripts/common.sh@337 -- # local 'op=<' 00:18:04.870 11:12:25 -- scripts/common.sh@339 -- # ver1_l=2 00:18:04.870 11:12:25 -- scripts/common.sh@340 -- # ver2_l=1 00:18:04.870 11:12:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:04.870 11:12:25 -- scripts/common.sh@343 -- # case "$op" in 00:18:04.870 11:12:25 -- scripts/common.sh@344 -- # : 1 00:18:04.870 11:12:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:04.870 11:12:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:04.870 11:12:25 -- scripts/common.sh@364 -- # decimal 1 00:18:04.870 11:12:25 -- scripts/common.sh@352 -- # local d=1 00:18:04.870 11:12:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:04.870 11:12:25 -- scripts/common.sh@354 -- # echo 1 00:18:04.870 11:12:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:04.870 11:12:25 -- scripts/common.sh@365 -- # decimal 2 00:18:04.870 11:12:25 -- scripts/common.sh@352 -- # local d=2 00:18:04.870 11:12:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:04.870 11:12:25 -- scripts/common.sh@354 -- # echo 2 00:18:04.870 11:12:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:04.870 11:12:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:04.870 11:12:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:04.870 11:12:25 -- scripts/common.sh@367 -- # return 0 00:18:04.870 11:12:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:04.870 11:12:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:04.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.870 --rc genhtml_branch_coverage=1 00:18:04.870 --rc genhtml_function_coverage=1 00:18:04.870 --rc genhtml_legend=1 00:18:04.870 --rc geninfo_all_blocks=1 00:18:04.870 --rc geninfo_unexecuted_blocks=1 00:18:04.870 00:18:04.870 ' 00:18:04.870 11:12:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:04.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.870 --rc genhtml_branch_coverage=1 00:18:04.870 --rc genhtml_function_coverage=1 00:18:04.870 --rc genhtml_legend=1 00:18:04.870 --rc geninfo_all_blocks=1 00:18:04.870 --rc geninfo_unexecuted_blocks=1 00:18:04.870 00:18:04.870 ' 00:18:04.870 11:12:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:04.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.870 --rc genhtml_branch_coverage=1 00:18:04.870 --rc genhtml_function_coverage=1 00:18:04.870 --rc genhtml_legend=1 00:18:04.870 --rc geninfo_all_blocks=1 00:18:04.870 --rc geninfo_unexecuted_blocks=1 00:18:04.870 00:18:04.870 ' 00:18:04.870 11:12:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:04.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.870 --rc genhtml_branch_coverage=1 00:18:04.870 --rc genhtml_function_coverage=1 00:18:04.870 --rc genhtml_legend=1 00:18:04.870 --rc geninfo_all_blocks=1 00:18:04.870 --rc geninfo_unexecuted_blocks=1 00:18:04.870 00:18:04.870 ' 00:18:04.870 11:12:25 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:04.870 11:12:25 -- nvmf/common.sh@7 -- # uname -s 00:18:04.870 11:12:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:04.870 11:12:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:04.870 11:12:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:04.870 11:12:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:04.870 11:12:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:04.870 11:12:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:04.870 11:12:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:04.870 11:12:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:04.870 11:12:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:04.870 11:12:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:04.870 11:12:25 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:18:04.870 11:12:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:18:04.870 11:12:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:04.870 11:12:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:04.870 11:12:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:04.870 11:12:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:04.870 11:12:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:04.870 11:12:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:04.870 11:12:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:04.870 11:12:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.871 11:12:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.871 11:12:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.871 11:12:25 -- paths/export.sh@5 -- # export PATH 00:18:04.871 11:12:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.871 11:12:25 -- nvmf/common.sh@46 -- # : 0 00:18:04.871 11:12:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:04.871 11:12:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:04.871 11:12:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:04.871 11:12:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:04.871 11:12:25 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:04.871 11:12:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:04.871 11:12:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:04.871 11:12:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:04.871 11:12:25 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:04.871 11:12:25 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:04.871 11:12:25 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:04.871 11:12:25 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:18:04.871 11:12:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:04.871 11:12:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:04.871 11:12:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:04.871 11:12:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:04.871 11:12:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.871 11:12:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:04.871 11:12:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:04.871 11:12:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:04.871 11:12:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:04.871 11:12:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:04.871 11:12:25 -- common/autotest_common.sh@10 -- # set +x 00:18:10.144 11:12:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:10.144 11:12:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:10.144 11:12:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:10.144 11:12:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:10.144 11:12:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:10.144 11:12:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:10.144 11:12:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:10.144 11:12:30 -- nvmf/common.sh@294 -- # net_devs=() 00:18:10.144 11:12:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:10.144 11:12:30 -- nvmf/common.sh@295 -- # e810=() 00:18:10.144 11:12:30 -- nvmf/common.sh@295 -- # local -ga e810 00:18:10.144 11:12:30 -- nvmf/common.sh@296 -- # x722=() 00:18:10.144 11:12:30 -- nvmf/common.sh@296 -- # local -ga x722 00:18:10.144 11:12:30 -- nvmf/common.sh@297 -- # mlx=() 00:18:10.144 11:12:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:10.144 11:12:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:10.144 11:12:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:10.144 11:12:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:10.144 11:12:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:10.144 11:12:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:10.144 11:12:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:10.144 11:12:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:10.144 11:12:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:10.144 11:12:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:10.144 11:12:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:10.144 11:12:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:10.144 11:12:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:10.144 11:12:30 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:18:10.144 11:12:30 -- nvmf/common.sh@321 -- # 
pci_devs+=("${x722[@]}") 00:18:10.144 11:12:30 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:18:10.144 11:12:30 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:18:10.144 11:12:30 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:18:10.144 11:12:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:10.144 11:12:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:10.144 11:12:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:18:10.144 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:18:10.144 11:12:30 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:10.144 11:12:30 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:10.144 11:12:30 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:10.144 11:12:30 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:10.144 11:12:30 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:10.144 11:12:30 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:10.144 11:12:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:10.144 11:12:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:18:10.144 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:18:10.144 11:12:30 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:10.144 11:12:30 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:10.144 11:12:30 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:10.144 11:12:30 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:10.144 11:12:30 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:10.144 11:12:30 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:10.144 11:12:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:10.144 11:12:30 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:18:10.144 11:12:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:10.144 11:12:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:10.144 11:12:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:10.144 11:12:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:10.144 11:12:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:10.144 Found net devices under 0000:18:00.0: mlx_0_0 00:18:10.144 11:12:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:10.144 11:12:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:10.144 11:12:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:10.144 11:12:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:10.144 11:12:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:10.144 11:12:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:10.144 Found net devices under 0000:18:00.1: mlx_0_1 00:18:10.144 11:12:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:10.144 11:12:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:10.144 11:12:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:10.144 11:12:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:10.144 11:12:30 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:18:10.144 11:12:30 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:18:10.144 11:12:30 -- nvmf/common.sh@408 -- # rdma_device_init 00:18:10.144 11:12:30 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:18:10.144 11:12:30 -- nvmf/common.sh@57 -- # uname 00:18:10.144 11:12:30 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:18:10.144 11:12:30 -- nvmf/common.sh@61 
-- # modprobe ib_cm 00:18:10.144 11:12:30 -- nvmf/common.sh@62 -- # modprobe ib_core 00:18:10.144 11:12:30 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:18:10.144 11:12:30 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:18:10.144 11:12:30 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:18:10.144 11:12:30 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:18:10.144 11:12:30 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:18:10.144 11:12:30 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:18:10.144 11:12:30 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:10.144 11:12:30 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:18:10.144 11:12:30 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:10.144 11:12:30 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:10.144 11:12:30 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:10.144 11:12:30 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:10.144 11:12:30 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:10.144 11:12:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:10.144 11:12:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.144 11:12:30 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:10.144 11:12:30 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:10.144 11:12:30 -- nvmf/common.sh@104 -- # continue 2 00:18:10.144 11:12:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:10.144 11:12:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.144 11:12:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:10.144 11:12:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.144 11:12:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:10.144 11:12:30 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:10.144 11:12:30 -- nvmf/common.sh@104 -- # continue 2 00:18:10.144 11:12:30 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:10.144 11:12:30 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:18:10.144 11:12:30 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:10.144 11:12:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:10.144 11:12:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:10.144 11:12:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:10.144 11:12:30 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:18:10.144 11:12:30 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:18:10.144 11:12:30 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:18:10.144 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:10.144 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:18:10.144 altname enp24s0f0np0 00:18:10.144 altname ens785f0np0 00:18:10.144 inet 192.168.100.8/24 scope global mlx_0_0 00:18:10.144 valid_lft forever preferred_lft forever 00:18:10.144 11:12:30 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:10.144 11:12:30 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:18:10.144 11:12:30 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:10.144 11:12:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:10.144 11:12:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:10.144 11:12:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:10.144 11:12:30 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:18:10.144 11:12:30 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:18:10.144 11:12:30 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:18:10.144 3: mlx_0_1: 
mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:10.144 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:18:10.144 altname enp24s0f1np1 00:18:10.144 altname ens785f1np1 00:18:10.144 inet 192.168.100.9/24 scope global mlx_0_1 00:18:10.144 valid_lft forever preferred_lft forever 00:18:10.144 11:12:30 -- nvmf/common.sh@410 -- # return 0 00:18:10.144 11:12:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:10.144 11:12:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:10.144 11:12:30 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:18:10.144 11:12:30 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:18:10.144 11:12:30 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:18:10.144 11:12:30 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:10.144 11:12:30 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:10.144 11:12:30 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:10.144 11:12:30 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:10.144 11:12:30 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:10.144 11:12:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:10.144 11:12:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.144 11:12:30 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:10.144 11:12:30 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:10.144 11:12:30 -- nvmf/common.sh@104 -- # continue 2 00:18:10.144 11:12:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:10.144 11:12:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.144 11:12:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:10.144 11:12:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.144 11:12:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:10.144 11:12:30 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:10.144 11:12:30 -- nvmf/common.sh@104 -- # continue 2 00:18:10.144 11:12:30 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:10.144 11:12:30 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:18:10.144 11:12:30 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:10.144 11:12:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:10.144 11:12:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:10.144 11:12:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:10.144 11:12:30 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:10.144 11:12:30 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:18:10.144 11:12:30 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:10.144 11:12:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:10.144 11:12:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:10.144 11:12:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:10.144 11:12:30 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:18:10.144 192.168.100.9' 00:18:10.144 11:12:30 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:18:10.144 192.168.100.9' 00:18:10.144 11:12:30 -- nvmf/common.sh@445 -- # head -n 1 00:18:10.144 11:12:30 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:10.144 11:12:30 -- nvmf/common.sh@446 -- # head -n 1 00:18:10.144 11:12:30 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:10.144 192.168.100.9' 00:18:10.144 11:12:30 -- nvmf/common.sh@446 -- # tail -n +2 00:18:10.144 11:12:30 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:10.144 11:12:30 -- 
nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:18:10.144 11:12:30 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:10.144 11:12:30 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:18:10.144 11:12:30 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:18:10.144 11:12:30 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:18:10.144 11:12:30 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:10.144 11:12:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:10.144 11:12:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:10.144 11:12:30 -- common/autotest_common.sh@10 -- # set +x 00:18:10.144 11:12:30 -- nvmf/common.sh@469 -- # nvmfpid=1627687 00:18:10.144 11:12:30 -- nvmf/common.sh@470 -- # waitforlisten 1627687 00:18:10.144 11:12:30 -- common/autotest_common.sh@829 -- # '[' -z 1627687 ']' 00:18:10.144 11:12:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.144 11:12:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:10.144 11:12:30 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:10.144 11:12:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.144 11:12:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:10.144 11:12:30 -- common/autotest_common.sh@10 -- # set +x 00:18:10.144 [2024-12-13 11:12:30.432723] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:10.144 [2024-12-13 11:12:30.432766] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.144 EAL: No free 2048 kB hugepages reported on node 1 00:18:10.144 [2024-12-13 11:12:30.484668] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:10.144 [2024-12-13 11:12:30.557297] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:10.144 [2024-12-13 11:12:30.557404] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:10.144 [2024-12-13 11:12:30.557411] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:10.144 [2024-12-13 11:12:30.557418] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
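For anyone replaying this bring-up by hand, the launch-and-wait step traced around here (nvmf_tgt started with --wait-for-rpc, then waitforlisten on /var/tmp/spdk.sock) boils down to the sketch below. Paths assume an SPDK checkout; the polling loop illustrates what waitforlisten accomplishes, it is not the autotest helper itself.

    # Start the target with the traced core mask and trace flags, paused until RPC init.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!
    # Poll the default RPC socket until the application answers, then continue with RPCs.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done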
00:18:10.144 [2024-12-13 11:12:30.557459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.144 [2024-12-13 11:12:30.557489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:10.144 [2024-12-13 11:12:30.557571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:10.144 [2024-12-13 11:12:30.557572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.712 11:12:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:10.712 11:12:31 -- common/autotest_common.sh@862 -- # return 0 00:18:10.712 11:12:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:10.712 11:12:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:10.712 11:12:31 -- common/autotest_common.sh@10 -- # set +x 00:18:10.712 11:12:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.712 11:12:31 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:10.712 11:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.712 11:12:31 -- common/autotest_common.sh@10 -- # set +x 00:18:10.712 11:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.712 11:12:31 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:10.712 11:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.712 11:12:31 -- common/autotest_common.sh@10 -- # set +x 00:18:10.972 11:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.972 11:12:31 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:10.972 11:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.972 11:12:31 -- common/autotest_common.sh@10 -- # set +x 00:18:10.972 [2024-12-13 11:12:31.348935] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x23e9960/0x23ede50) succeed. 00:18:10.972 [2024-12-13 11:12:31.356890] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x23eaf50/0x242f4f0) succeed. 
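The three RPCs traced just above are the entire pre-I/O configuration of the target for this test. Issued by hand against the same socket (scripts/rpc.py being the usual front-end that rpc_cmd drives), the sketch looks like this; option values are copied verbatim from the trace:

    # Deliberately tiny bdev_io pool/cache so the bdev_io_wait retry path actually triggers.
    ./scripts/rpc.py bdev_set_options -p 5 -c 1
    # Finish the subsystem initialization that --wait-for-rpc deferred.
    ./scripts/rpc.py framework_start_init
    # RDMA transport with the buffer sizing chosen by nvmftestinit for this NIC type.
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192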
00:18:10.972 11:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.972 11:12:31 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:10.972 11:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.972 11:12:31 -- common/autotest_common.sh@10 -- # set +x 00:18:10.972 Malloc0 00:18:10.972 11:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.972 11:12:31 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:10.972 11:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.972 11:12:31 -- common/autotest_common.sh@10 -- # set +x 00:18:10.972 11:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.972 11:12:31 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:10.972 11:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.972 11:12:31 -- common/autotest_common.sh@10 -- # set +x 00:18:10.972 11:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.972 11:12:31 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:10.972 11:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.972 11:12:31 -- common/autotest_common.sh@10 -- # set +x 00:18:10.972 [2024-12-13 11:12:31.521482] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:10.972 11:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.972 11:12:31 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1627852 00:18:10.972 11:12:31 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:10.972 11:12:31 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:10.972 11:12:31 -- target/bdev_io_wait.sh@30 -- # READ_PID=1627854 00:18:10.972 11:12:31 -- nvmf/common.sh@520 -- # config=() 00:18:10.972 11:12:31 -- nvmf/common.sh@520 -- # local subsystem config 00:18:10.972 11:12:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:10.972 11:12:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:10.972 { 00:18:10.972 "params": { 00:18:10.972 "name": "Nvme$subsystem", 00:18:10.972 "trtype": "$TEST_TRANSPORT", 00:18:10.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:10.972 "adrfam": "ipv4", 00:18:10.972 "trsvcid": "$NVMF_PORT", 00:18:10.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:10.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:10.972 "hdgst": ${hdgst:-false}, 00:18:10.972 "ddgst": ${ddgst:-false} 00:18:10.972 }, 00:18:10.972 "method": "bdev_nvme_attach_controller" 00:18:10.972 } 00:18:10.972 EOF 00:18:10.972 )") 00:18:10.972 11:12:31 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:10.972 11:12:31 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1627856 00:18:10.972 11:12:31 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:10.972 11:12:31 -- nvmf/common.sh@520 -- # config=() 00:18:10.972 11:12:31 -- nvmf/common.sh@520 -- # local subsystem config 00:18:10.972 11:12:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:10.972 11:12:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:10.972 { 00:18:10.972 "params": { 00:18:10.972 "name": 
"Nvme$subsystem", 00:18:10.972 "trtype": "$TEST_TRANSPORT", 00:18:10.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:10.972 "adrfam": "ipv4", 00:18:10.972 "trsvcid": "$NVMF_PORT", 00:18:10.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:10.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:10.972 "hdgst": ${hdgst:-false}, 00:18:10.972 "ddgst": ${ddgst:-false} 00:18:10.972 }, 00:18:10.972 "method": "bdev_nvme_attach_controller" 00:18:10.972 } 00:18:10.972 EOF 00:18:10.972 )") 00:18:10.972 11:12:31 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1627859 00:18:10.972 11:12:31 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:10.972 11:12:31 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:10.972 11:12:31 -- nvmf/common.sh@542 -- # cat 00:18:10.972 11:12:31 -- target/bdev_io_wait.sh@35 -- # sync 00:18:10.972 11:12:31 -- nvmf/common.sh@520 -- # config=() 00:18:10.972 11:12:31 -- nvmf/common.sh@520 -- # local subsystem config 00:18:10.972 11:12:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:10.972 11:12:31 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:10.972 11:12:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:10.972 { 00:18:10.972 "params": { 00:18:10.972 "name": "Nvme$subsystem", 00:18:10.972 "trtype": "$TEST_TRANSPORT", 00:18:10.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:10.972 "adrfam": "ipv4", 00:18:10.972 "trsvcid": "$NVMF_PORT", 00:18:10.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:10.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:10.972 "hdgst": ${hdgst:-false}, 00:18:10.972 "ddgst": ${ddgst:-false} 00:18:10.972 }, 00:18:10.972 "method": "bdev_nvme_attach_controller" 00:18:10.972 } 00:18:10.972 EOF 00:18:10.972 )") 00:18:10.972 11:12:31 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:10.972 11:12:31 -- nvmf/common.sh@520 -- # config=() 00:18:10.972 11:12:31 -- nvmf/common.sh@542 -- # cat 00:18:10.972 11:12:31 -- nvmf/common.sh@520 -- # local subsystem config 00:18:10.972 11:12:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:10.972 11:12:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:10.972 { 00:18:10.972 "params": { 00:18:10.972 "name": "Nvme$subsystem", 00:18:10.972 "trtype": "$TEST_TRANSPORT", 00:18:10.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:10.972 "adrfam": "ipv4", 00:18:10.972 "trsvcid": "$NVMF_PORT", 00:18:10.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:10.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:10.972 "hdgst": ${hdgst:-false}, 00:18:10.972 "ddgst": ${ddgst:-false} 00:18:10.972 }, 00:18:10.972 "method": "bdev_nvme_attach_controller" 00:18:10.972 } 00:18:10.972 EOF 00:18:10.972 )") 00:18:10.972 11:12:31 -- nvmf/common.sh@542 -- # cat 00:18:10.972 11:12:31 -- target/bdev_io_wait.sh@37 -- # wait 1627852 00:18:10.972 11:12:31 -- nvmf/common.sh@542 -- # cat 00:18:10.972 11:12:31 -- nvmf/common.sh@544 -- # jq . 00:18:10.972 11:12:31 -- nvmf/common.sh@544 -- # jq . 00:18:10.972 11:12:31 -- nvmf/common.sh@544 -- # jq . 
00:18:10.972 11:12:31 -- nvmf/common.sh@545 -- # IFS=, 00:18:11.231 11:12:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:11.231 "params": { 00:18:11.231 "name": "Nvme1", 00:18:11.231 "trtype": "rdma", 00:18:11.231 "traddr": "192.168.100.8", 00:18:11.231 "adrfam": "ipv4", 00:18:11.231 "trsvcid": "4420", 00:18:11.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.231 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:11.231 "hdgst": false, 00:18:11.231 "ddgst": false 00:18:11.231 }, 00:18:11.231 "method": "bdev_nvme_attach_controller" 00:18:11.231 }' 00:18:11.231 11:12:31 -- nvmf/common.sh@544 -- # jq . 00:18:11.231 11:12:31 -- nvmf/common.sh@545 -- # IFS=, 00:18:11.231 11:12:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:11.231 "params": { 00:18:11.231 "name": "Nvme1", 00:18:11.231 "trtype": "rdma", 00:18:11.231 "traddr": "192.168.100.8", 00:18:11.231 "adrfam": "ipv4", 00:18:11.231 "trsvcid": "4420", 00:18:11.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.231 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:11.231 "hdgst": false, 00:18:11.231 "ddgst": false 00:18:11.231 }, 00:18:11.231 "method": "bdev_nvme_attach_controller" 00:18:11.231 }' 00:18:11.231 11:12:31 -- nvmf/common.sh@545 -- # IFS=, 00:18:11.231 11:12:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:11.231 "params": { 00:18:11.231 "name": "Nvme1", 00:18:11.231 "trtype": "rdma", 00:18:11.231 "traddr": "192.168.100.8", 00:18:11.231 "adrfam": "ipv4", 00:18:11.231 "trsvcid": "4420", 00:18:11.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.231 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:11.231 "hdgst": false, 00:18:11.231 "ddgst": false 00:18:11.231 }, 00:18:11.231 "method": "bdev_nvme_attach_controller" 00:18:11.231 }' 00:18:11.231 11:12:31 -- nvmf/common.sh@545 -- # IFS=, 00:18:11.231 11:12:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:11.231 "params": { 00:18:11.231 "name": "Nvme1", 00:18:11.231 "trtype": "rdma", 00:18:11.231 "traddr": "192.168.100.8", 00:18:11.231 "adrfam": "ipv4", 00:18:11.231 "trsvcid": "4420", 00:18:11.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.231 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:11.231 "hdgst": false, 00:18:11.231 "ddgst": false 00:18:11.231 }, 00:18:11.231 "method": "bdev_nvme_attach_controller" 00:18:11.231 }' 00:18:11.231 [2024-12-13 11:12:31.566653] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:11.231 [2024-12-13 11:12:31.566655] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:11.231 [2024-12-13 11:12:31.566699] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:11.231 [2024-12-13 11:12:31.566700] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:11.231 [2024-12-13 11:12:31.568154] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:18:11.231 [2024-12-13 11:12:31.568191] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:11.231 [2024-12-13 11:12:31.569052] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:11.231 [2024-12-13 11:12:31.569098] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:18:11.231 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.231 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.231 [2024-12-13 11:12:31.741071] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.231 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.490 [2024-12-13 11:12:31.812997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:18:11.490 [2024-12-13 11:12:31.831108] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.490 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.490 [2024-12-13 11:12:31.901954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:11.490 [2024-12-13 11:12:31.923867] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.490 [2024-12-13 11:12:31.966822] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.490 [2024-12-13 11:12:32.001175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:11.490 [2024-12-13 11:12:32.039024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:11.749 Running I/O for 1 seconds... 00:18:11.749 Running I/O for 1 seconds... 00:18:11.749 Running I/O for 1 seconds... 00:18:11.749 Running I/O for 1 seconds... 
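Each bdevperf instance above receives its bdev layout on /dev/fd/63 rather than via live RPC: gen_nvmf_target_json wraps the printed bdev_nvme_attach_controller parameters in the standard SPDK subsystems/config envelope. Reproduced by hand for the write job it would look roughly like the following; the envelope is the usual SPDK JSON-config layout, assumed here since the trace only shows the inner fragment, and nvme1.json is a hypothetical file name.

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "rdma",
                "traddr": "192.168.100.8",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }

    ./build/examples/bdevperf -m 0x10 -i 1 --json nvme1.json -q 128 -o 4096 -w write -t 1 -s 256

The other three instances differ only in core mask (-m), instance id (-i) and workload (-w read, flush, unmap).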
00:18:12.686 00:18:12.686 Latency(us) 00:18:12.686 [2024-12-13T10:12:33.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.686 [2024-12-13T10:12:33.255Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:12.686 Nvme1n1 : 1.00 21474.09 83.88 0.00 0.00 5945.60 3094.76 13107.20 00:18:12.686 [2024-12-13T10:12:33.255Z] =================================================================================================================== 00:18:12.686 [2024-12-13T10:12:33.255Z] Total : 21474.09 83.88 0.00 0.00 5945.60 3094.76 13107.20 00:18:12.686 00:18:12.686 Latency(us) 00:18:12.686 [2024-12-13T10:12:33.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.686 [2024-12-13T10:12:33.255Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:12.686 Nvme1n1 : 1.01 15847.23 61.90 0.00 0.00 8052.68 5170.06 19806.44 00:18:12.686 [2024-12-13T10:12:33.255Z] =================================================================================================================== 00:18:12.686 [2024-12-13T10:12:33.255Z] Total : 15847.23 61.90 0.00 0.00 8052.68 5170.06 19806.44 00:18:12.686 00:18:12.686 Latency(us) 00:18:12.686 [2024-12-13T10:12:33.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.686 [2024-12-13T10:12:33.255Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:18:12.686 Nvme1n1 : 1.01 15282.83 59.70 0.00 0.00 8349.44 5437.06 19029.71 00:18:12.686 [2024-12-13T10:12:33.255Z] =================================================================================================================== 00:18:12.686 [2024-12-13T10:12:33.255Z] Total : 15282.83 59.70 0.00 0.00 8349.44 5437.06 19029.71 00:18:12.686 00:18:12.686 Latency(us) 00:18:12.686 [2024-12-13T10:12:33.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.686 [2024-12-13T10:12:33.255Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:12.686 Nvme1n1 : 1.00 280495.49 1095.69 0.00 0.00 455.06 188.87 1589.85 00:18:12.686 [2024-12-13T10:12:33.255Z] =================================================================================================================== 00:18:12.686 [2024-12-13T10:12:33.255Z] Total : 280495.49 1095.69 0.00 0.00 455.06 188.87 1589.85 00:18:12.945 11:12:33 -- target/bdev_io_wait.sh@38 -- # wait 1627854 00:18:12.945 11:12:33 -- target/bdev_io_wait.sh@39 -- # wait 1627856 00:18:12.945 11:12:33 -- target/bdev_io_wait.sh@40 -- # wait 1627859 00:18:12.945 11:12:33 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:12.945 11:12:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.945 11:12:33 -- common/autotest_common.sh@10 -- # set +x 00:18:12.945 11:12:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.945 11:12:33 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:12.945 11:12:33 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:12.945 11:12:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:12.945 11:12:33 -- nvmf/common.sh@116 -- # sync 00:18:12.945 11:12:33 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:18:12.945 11:12:33 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:18:12.945 11:12:33 -- nvmf/common.sh@119 -- # set +e 00:18:12.945 11:12:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:12.945 11:12:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:18:12.945 rmmod nvme_rdma 
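The IOPS and MiB/s columns in the tables above are mutually consistent with the 4 KiB I/O size passed to bdevperf (-o 4096): 21474.09 IOPS x 4096 B ≈ 83.88 MiB/s for the unmap job, 15847.23 IOPS ≈ 61.90 MiB/s for read, and 15282.83 IOPS ≈ 59.70 MiB/s for write. Average latency likewise tracks queue depth / IOPS, e.g. 128 / 21474 ≈ 5.96 ms for unmap. The flush job's ~280k IOPS at a 455 µs average is expected here, since flushes against a RAM-backed malloc bdev complete without any media work.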
00:18:12.945 rmmod nvme_fabrics 00:18:13.204 11:12:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:13.204 11:12:33 -- nvmf/common.sh@123 -- # set -e 00:18:13.204 11:12:33 -- nvmf/common.sh@124 -- # return 0 00:18:13.204 11:12:33 -- nvmf/common.sh@477 -- # '[' -n 1627687 ']' 00:18:13.204 11:12:33 -- nvmf/common.sh@478 -- # killprocess 1627687 00:18:13.204 11:12:33 -- common/autotest_common.sh@936 -- # '[' -z 1627687 ']' 00:18:13.204 11:12:33 -- common/autotest_common.sh@940 -- # kill -0 1627687 00:18:13.204 11:12:33 -- common/autotest_common.sh@941 -- # uname 00:18:13.204 11:12:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:13.204 11:12:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1627687 00:18:13.204 11:12:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:13.204 11:12:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:13.204 11:12:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1627687' 00:18:13.204 killing process with pid 1627687 00:18:13.204 11:12:33 -- common/autotest_common.sh@955 -- # kill 1627687 00:18:13.204 11:12:33 -- common/autotest_common.sh@960 -- # wait 1627687 00:18:13.463 11:12:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:13.463 11:12:33 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:18:13.463 00:18:13.463 real 0m8.615s 00:18:13.463 user 0m19.946s 00:18:13.463 sys 0m4.891s 00:18:13.463 11:12:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:13.463 11:12:33 -- common/autotest_common.sh@10 -- # set +x 00:18:13.463 ************************************ 00:18:13.463 END TEST nvmf_bdev_io_wait 00:18:13.463 ************************************ 00:18:13.463 11:12:33 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:18:13.463 11:12:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:13.463 11:12:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:13.463 11:12:33 -- common/autotest_common.sh@10 -- # set +x 00:18:13.463 ************************************ 00:18:13.463 START TEST nvmf_queue_depth 00:18:13.463 ************************************ 00:18:13.463 11:12:33 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:18:13.463 * Looking for test storage... 
00:18:13.463 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:13.463 11:12:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:13.463 11:12:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:13.463 11:12:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:13.463 11:12:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:13.463 11:12:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:13.463 11:12:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:13.463 11:12:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:13.463 11:12:34 -- scripts/common.sh@335 -- # IFS=.-: 00:18:13.463 11:12:34 -- scripts/common.sh@335 -- # read -ra ver1 00:18:13.463 11:12:34 -- scripts/common.sh@336 -- # IFS=.-: 00:18:13.463 11:12:34 -- scripts/common.sh@336 -- # read -ra ver2 00:18:13.463 11:12:34 -- scripts/common.sh@337 -- # local 'op=<' 00:18:13.463 11:12:34 -- scripts/common.sh@339 -- # ver1_l=2 00:18:13.463 11:12:34 -- scripts/common.sh@340 -- # ver2_l=1 00:18:13.463 11:12:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:13.463 11:12:34 -- scripts/common.sh@343 -- # case "$op" in 00:18:13.463 11:12:34 -- scripts/common.sh@344 -- # : 1 00:18:13.463 11:12:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:13.463 11:12:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:13.463 11:12:34 -- scripts/common.sh@364 -- # decimal 1 00:18:13.463 11:12:34 -- scripts/common.sh@352 -- # local d=1 00:18:13.463 11:12:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:13.463 11:12:34 -- scripts/common.sh@354 -- # echo 1 00:18:13.463 11:12:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:13.463 11:12:34 -- scripts/common.sh@365 -- # decimal 2 00:18:13.463 11:12:34 -- scripts/common.sh@352 -- # local d=2 00:18:13.722 11:12:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:13.722 11:12:34 -- scripts/common.sh@354 -- # echo 2 00:18:13.722 11:12:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:13.722 11:12:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:13.722 11:12:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:13.722 11:12:34 -- scripts/common.sh@367 -- # return 0 00:18:13.722 11:12:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:13.722 11:12:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:13.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.722 --rc genhtml_branch_coverage=1 00:18:13.722 --rc genhtml_function_coverage=1 00:18:13.722 --rc genhtml_legend=1 00:18:13.722 --rc geninfo_all_blocks=1 00:18:13.722 --rc geninfo_unexecuted_blocks=1 00:18:13.722 00:18:13.722 ' 00:18:13.722 11:12:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:13.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.722 --rc genhtml_branch_coverage=1 00:18:13.722 --rc genhtml_function_coverage=1 00:18:13.722 --rc genhtml_legend=1 00:18:13.722 --rc geninfo_all_blocks=1 00:18:13.722 --rc geninfo_unexecuted_blocks=1 00:18:13.722 00:18:13.722 ' 00:18:13.722 11:12:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:13.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.722 --rc genhtml_branch_coverage=1 00:18:13.722 --rc genhtml_function_coverage=1 00:18:13.722 --rc genhtml_legend=1 00:18:13.722 --rc geninfo_all_blocks=1 00:18:13.722 --rc geninfo_unexecuted_blocks=1 00:18:13.722 00:18:13.722 ' 
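The lcov gate traced just above (lt 1.15 2 via cmp_versions) is a plain component-wise version comparison: split both strings on '.', '-' and ':' and compare field by field, treating missing fields as 0. A compact stand-alone sketch of the same idea (not the scripts/common.sh function itself):

    version_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1
    }
    version_lt 1.15 2 && echo "lcov predates 2.0: use the pre-2.0 LCOV_OPTS"

Here 1 < 2 already decides the result in the first field, which is why the trace selects the older lcov_branch_coverage/lcov_function_coverage option set.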
00:18:13.722 11:12:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:13.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.722 --rc genhtml_branch_coverage=1 00:18:13.722 --rc genhtml_function_coverage=1 00:18:13.722 --rc genhtml_legend=1 00:18:13.722 --rc geninfo_all_blocks=1 00:18:13.722 --rc geninfo_unexecuted_blocks=1 00:18:13.722 00:18:13.722 ' 00:18:13.722 11:12:34 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:13.722 11:12:34 -- nvmf/common.sh@7 -- # uname -s 00:18:13.722 11:12:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:13.722 11:12:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:13.722 11:12:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:13.722 11:12:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:13.722 11:12:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:13.722 11:12:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:13.722 11:12:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:13.722 11:12:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:13.722 11:12:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:13.722 11:12:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:13.722 11:12:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:18:13.722 11:12:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:18:13.722 11:12:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:13.722 11:12:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:13.722 11:12:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:13.722 11:12:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:13.722 11:12:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.722 11:12:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.722 11:12:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.722 11:12:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.722 11:12:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.722 11:12:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.723 11:12:34 -- paths/export.sh@5 -- # export PATH 00:18:13.723 11:12:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.723 11:12:34 -- nvmf/common.sh@46 -- # : 0 00:18:13.723 11:12:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:13.723 11:12:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:13.723 11:12:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:13.723 11:12:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:13.723 11:12:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:13.723 11:12:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:13.723 11:12:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:13.723 11:12:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:13.723 11:12:34 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:13.723 11:12:34 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:13.723 11:12:34 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:13.723 11:12:34 -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:13.723 11:12:34 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:18:13.723 11:12:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:13.723 11:12:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:13.723 11:12:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:13.723 11:12:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:13.723 11:12:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.723 11:12:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.723 11:12:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.723 11:12:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:13.723 11:12:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:13.723 11:12:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:13.723 11:12:34 -- common/autotest_common.sh@10 -- # set +x 00:18:18.994 11:12:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:18.994 11:12:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:18.994 11:12:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:18.994 11:12:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:18.994 11:12:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:18.994 11:12:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:18.994 11:12:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:18.994 11:12:39 -- nvmf/common.sh@294 -- # net_devs=() 
00:18:18.994 11:12:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:18.994 11:12:39 -- nvmf/common.sh@295 -- # e810=() 00:18:18.994 11:12:39 -- nvmf/common.sh@295 -- # local -ga e810 00:18:18.994 11:12:39 -- nvmf/common.sh@296 -- # x722=() 00:18:18.994 11:12:39 -- nvmf/common.sh@296 -- # local -ga x722 00:18:18.994 11:12:39 -- nvmf/common.sh@297 -- # mlx=() 00:18:18.994 11:12:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:18.994 11:12:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:18.994 11:12:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:18.994 11:12:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:18.994 11:12:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:18.994 11:12:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:18.994 11:12:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:18.994 11:12:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:18.994 11:12:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:18.994 11:12:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:18.994 11:12:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:18.994 11:12:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:18.994 11:12:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:18.994 11:12:39 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:18:18.994 11:12:39 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:18:18.994 11:12:39 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:18:18.994 11:12:39 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:18:18.994 11:12:39 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:18:18.994 11:12:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:18.994 11:12:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:18.994 11:12:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:18:18.994 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:18:18.994 11:12:39 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:18.994 11:12:39 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:18.994 11:12:39 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:18.994 11:12:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:18.994 11:12:39 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:18.994 11:12:39 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:18.994 11:12:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:18.994 11:12:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:18:18.994 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:18:18.994 11:12:39 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:18.994 11:12:39 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:18.994 11:12:39 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:18.994 11:12:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:18.994 11:12:39 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:18.994 11:12:39 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:18.994 11:12:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:18.994 11:12:39 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:18:18.994 11:12:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:18.994 11:12:39 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:18.994 11:12:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:18.994 11:12:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:18.994 11:12:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:18.994 Found net devices under 0000:18:00.0: mlx_0_0 00:18:18.994 11:12:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:18.994 11:12:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:18.994 11:12:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:18.994 11:12:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:18.994 11:12:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:18.994 11:12:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:18.994 Found net devices under 0000:18:00.1: mlx_0_1 00:18:18.994 11:12:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:18.994 11:12:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:18.994 11:12:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:18.994 11:12:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:18.994 11:12:39 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:18:18.994 11:12:39 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:18:18.994 11:12:39 -- nvmf/common.sh@408 -- # rdma_device_init 00:18:18.994 11:12:39 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:18:18.994 11:12:39 -- nvmf/common.sh@57 -- # uname 00:18:18.994 11:12:39 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:18:18.994 11:12:39 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:18:18.994 11:12:39 -- nvmf/common.sh@62 -- # modprobe ib_core 00:18:18.994 11:12:39 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:18:18.994 11:12:39 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:18:18.994 11:12:39 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:18:18.994 11:12:39 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:18:18.994 11:12:39 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:18:18.994 11:12:39 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:18:18.994 11:12:39 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:18.994 11:12:39 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:18:18.994 11:12:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:18.994 11:12:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:18.994 11:12:39 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:18.994 11:12:39 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:18.994 11:12:39 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:18.994 11:12:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:18.994 11:12:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:18.994 11:12:39 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:18.994 11:12:39 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:18.994 11:12:39 -- nvmf/common.sh@104 -- # continue 2 00:18:18.994 11:12:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:18.994 11:12:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:18.994 11:12:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:18.994 11:12:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:18.994 11:12:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:18.994 11:12:39 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:18.994 11:12:39 -- 
nvmf/common.sh@104 -- # continue 2 00:18:18.994 11:12:39 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:18.994 11:12:39 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:18:18.994 11:12:39 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:18.994 11:12:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:18.995 11:12:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:18.995 11:12:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:18.995 11:12:39 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:18:18.995 11:12:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:18:18.995 11:12:39 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:18:18.995 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:18.995 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:18:18.995 altname enp24s0f0np0 00:18:18.995 altname ens785f0np0 00:18:18.995 inet 192.168.100.8/24 scope global mlx_0_0 00:18:18.995 valid_lft forever preferred_lft forever 00:18:18.995 11:12:39 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:18.995 11:12:39 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:18:18.995 11:12:39 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:18.995 11:12:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:18.995 11:12:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:18.995 11:12:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:18.995 11:12:39 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:18:18.995 11:12:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:18:18.995 11:12:39 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:18:18.995 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:18.995 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:18:18.995 altname enp24s0f1np1 00:18:18.995 altname ens785f1np1 00:18:18.995 inet 192.168.100.9/24 scope global mlx_0_1 00:18:18.995 valid_lft forever preferred_lft forever 00:18:18.995 11:12:39 -- nvmf/common.sh@410 -- # return 0 00:18:18.995 11:12:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:18.995 11:12:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:18.995 11:12:39 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:18:18.995 11:12:39 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:18:18.995 11:12:39 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:18:18.995 11:12:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:18.995 11:12:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:18.995 11:12:39 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:18.995 11:12:39 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:18.995 11:12:39 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:18.995 11:12:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:18.995 11:12:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:18.995 11:12:39 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:18.995 11:12:39 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:18.995 11:12:39 -- nvmf/common.sh@104 -- # continue 2 00:18:18.995 11:12:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:18.995 11:12:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:18.995 11:12:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:18.995 11:12:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:18.995 11:12:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:18:18.995 11:12:39 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:18.995 11:12:39 -- nvmf/common.sh@104 -- # continue 2 00:18:18.995 11:12:39 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:18.995 11:12:39 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:18:18.995 11:12:39 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:18.995 11:12:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:18.995 11:12:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:18.995 11:12:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:18.995 11:12:39 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:18.995 11:12:39 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:18:18.995 11:12:39 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:18.995 11:12:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:18.995 11:12:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:18.995 11:12:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:18.995 11:12:39 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:18:18.995 192.168.100.9' 00:18:18.995 11:12:39 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:18:18.995 192.168.100.9' 00:18:18.995 11:12:39 -- nvmf/common.sh@445 -- # head -n 1 00:18:18.995 11:12:39 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:18.995 11:12:39 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:18.995 192.168.100.9' 00:18:18.995 11:12:39 -- nvmf/common.sh@446 -- # tail -n +2 00:18:18.995 11:12:39 -- nvmf/common.sh@446 -- # head -n 1 00:18:18.995 11:12:39 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:18.995 11:12:39 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:18:18.995 11:12:39 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:18.995 11:12:39 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:18:19.254 11:12:39 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:18:19.254 11:12:39 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:18:19.254 11:12:39 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:19.254 11:12:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:19.254 11:12:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:19.254 11:12:39 -- common/autotest_common.sh@10 -- # set +x 00:18:19.254 11:12:39 -- nvmf/common.sh@469 -- # nvmfpid=1631631 00:18:19.254 11:12:39 -- nvmf/common.sh@470 -- # waitforlisten 1631631 00:18:19.254 11:12:39 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:19.254 11:12:39 -- common/autotest_common.sh@829 -- # '[' -z 1631631 ']' 00:18:19.254 11:12:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.254 11:12:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:19.254 11:12:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.254 11:12:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:19.254 11:12:39 -- common/autotest_common.sh@10 -- # set +x 00:18:19.254 [2024-12-13 11:12:39.624802] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
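Note on the trace above: the two target addresses (192.168.100.8 and 192.168.100.9) are not hard-coded. The harness walks the RDMA-capable interfaces (get_rdma_if_list), strips the prefix length from the `ip -o -4 addr show` output, and get_available_rdma_ips collects the results into RDMA_IP_LIST, whose first and second entries become NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP. A minimal standalone sketch of that derivation, using the interface names detected in this run (the loop itself is illustrative, not the harness code):

    for ifc in mlx_0_0 mlx_0_1; do
        # column 4 of `ip -o -4 addr show` is ADDR/PREFIX; keep only the address
        ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
    done
    # head -n 1 of the resulting list  -> NVMF_FIRST_TARGET_IP  (192.168.100.8 here)
    # tail -n +2 | head -n 1           -> NVMF_SECOND_TARGET_IP (192.168.100.9 here)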
00:18:19.254 [2024-12-13 11:12:39.624850] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.254 EAL: No free 2048 kB hugepages reported on node 1 00:18:19.254 [2024-12-13 11:12:39.675432] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.254 [2024-12-13 11:12:39.746187] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:19.254 [2024-12-13 11:12:39.746288] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.254 [2024-12-13 11:12:39.746295] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.254 [2024-12-13 11:12:39.746301] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:19.254 [2024-12-13 11:12:39.746319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:20.190 11:12:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:20.190 11:12:40 -- common/autotest_common.sh@862 -- # return 0 00:18:20.190 11:12:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:20.190 11:12:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:20.190 11:12:40 -- common/autotest_common.sh@10 -- # set +x 00:18:20.190 11:12:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.190 11:12:40 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:20.190 11:12:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.190 11:12:40 -- common/autotest_common.sh@10 -- # set +x 00:18:20.190 [2024-12-13 11:12:40.477594] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a51af0/0x1a55fe0) succeed. 00:18:20.190 [2024-12-13 11:12:40.485655] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a52ff0/0x1a97680) succeed. 
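Up to this point the queue_depth test has only brought up the target side: nvmfappstart launches nvmf_tgt with a one-core mask and waits on its RPC socket, and the first rpc_cmd creates the RDMA transport, which is what produces the two create_ib_device notices above. Roughly equivalent standalone commands, as a sketch (paths relative to the SPDK tree; rpc_cmd in the trace is the harness wrapper around the same RPC, and the backgrounding/wait step here is illustrative):

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    # wait until /var/tmp/spdk.sock accepts RPC connections, then:
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192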
00:18:20.190 11:12:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.190 11:12:40 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:20.190 11:12:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.190 11:12:40 -- common/autotest_common.sh@10 -- # set +x 00:18:20.190 Malloc0 00:18:20.190 11:12:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.190 11:12:40 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:20.190 11:12:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.190 11:12:40 -- common/autotest_common.sh@10 -- # set +x 00:18:20.190 11:12:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.190 11:12:40 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:20.190 11:12:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.190 11:12:40 -- common/autotest_common.sh@10 -- # set +x 00:18:20.190 11:12:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.190 11:12:40 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:20.190 11:12:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.190 11:12:40 -- common/autotest_common.sh@10 -- # set +x 00:18:20.190 [2024-12-13 11:12:40.582600] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:20.190 11:12:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.190 11:12:40 -- target/queue_depth.sh@30 -- # bdevperf_pid=1631704 00:18:20.190 11:12:40 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:20.190 11:12:40 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:20.190 11:12:40 -- target/queue_depth.sh@33 -- # waitforlisten 1631704 /var/tmp/bdevperf.sock 00:18:20.190 11:12:40 -- common/autotest_common.sh@829 -- # '[' -z 1631704 ']' 00:18:20.190 11:12:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:20.190 11:12:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:20.190 11:12:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:20.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:20.190 11:12:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:20.190 11:12:40 -- common/autotest_common.sh@10 -- # set +x 00:18:20.190 [2024-12-13 11:12:40.626912] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
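With the transport in place, the remaining setup visible above is a 64 MiB malloc bdev exported as a namespace of cnode1 plus an RDMA listener on the first target IP, after which bdevperf is started as the initiator-side application at queue depth 1024. Loose standalone equivalents of the rpc_cmd calls in the trace (flags taken verbatim from the log; invoking them through rpc.py rather than the harness wrapper is an assumption):

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # initiator side, against its own RPC socket:
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10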
00:18:20.190 [2024-12-13 11:12:40.626951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1631704 ] 00:18:20.190 EAL: No free 2048 kB hugepages reported on node 1 00:18:20.191 [2024-12-13 11:12:40.676630] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.191 [2024-12-13 11:12:40.747748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.126 11:12:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:21.126 11:12:41 -- common/autotest_common.sh@862 -- # return 0 00:18:21.126 11:12:41 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:21.126 11:12:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.126 11:12:41 -- common/autotest_common.sh@10 -- # set +x 00:18:21.126 NVMe0n1 00:18:21.126 11:12:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.126 11:12:41 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:21.126 Running I/O for 10 seconds... 00:18:31.205 00:18:31.205 Latency(us) 00:18:31.205 [2024-12-13T10:12:51.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.205 [2024-12-13T10:12:51.774Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:31.205 Verification LBA range: start 0x0 length 0x4000 00:18:31.205 NVMe0n1 : 10.03 31128.34 121.60 0.00 0.00 32826.41 6941.96 32428.18 00:18:31.205 [2024-12-13T10:12:51.774Z] =================================================================================================================== 00:18:31.205 [2024-12-13T10:12:51.774Z] Total : 31128.34 121.60 0.00 0.00 32826.41 6941.96 32428.18 00:18:31.205 0 00:18:31.205 11:12:51 -- target/queue_depth.sh@39 -- # killprocess 1631704 00:18:31.205 11:12:51 -- common/autotest_common.sh@936 -- # '[' -z 1631704 ']' 00:18:31.205 11:12:51 -- common/autotest_common.sh@940 -- # kill -0 1631704 00:18:31.205 11:12:51 -- common/autotest_common.sh@941 -- # uname 00:18:31.205 11:12:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:31.205 11:12:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1631704 00:18:31.205 11:12:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:31.205 11:12:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:31.205 11:12:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1631704' 00:18:31.205 killing process with pid 1631704 00:18:31.205 11:12:51 -- common/autotest_common.sh@955 -- # kill 1631704 00:18:31.205 Received shutdown signal, test time was about 10.000000 seconds 00:18:31.205 00:18:31.205 Latency(us) 00:18:31.205 [2024-12-13T10:12:51.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.205 [2024-12-13T10:12:51.774Z] =================================================================================================================== 00:18:31.205 [2024-12-13T10:12:51.774Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:31.205 11:12:51 -- common/autotest_common.sh@960 -- # wait 1631704 00:18:31.464 11:12:51 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:31.464 11:12:51 -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:18:31.464 11:12:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:31.464 11:12:51 -- nvmf/common.sh@116 -- # sync 00:18:31.464 11:12:51 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:18:31.464 11:12:51 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:18:31.464 11:12:51 -- nvmf/common.sh@119 -- # set +e 00:18:31.465 11:12:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:31.465 11:12:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:18:31.465 rmmod nvme_rdma 00:18:31.465 rmmod nvme_fabrics 00:18:31.465 11:12:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:31.465 11:12:51 -- nvmf/common.sh@123 -- # set -e 00:18:31.465 11:12:51 -- nvmf/common.sh@124 -- # return 0 00:18:31.465 11:12:51 -- nvmf/common.sh@477 -- # '[' -n 1631631 ']' 00:18:31.465 11:12:51 -- nvmf/common.sh@478 -- # killprocess 1631631 00:18:31.465 11:12:51 -- common/autotest_common.sh@936 -- # '[' -z 1631631 ']' 00:18:31.465 11:12:51 -- common/autotest_common.sh@940 -- # kill -0 1631631 00:18:31.465 11:12:51 -- common/autotest_common.sh@941 -- # uname 00:18:31.465 11:12:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:31.465 11:12:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1631631 00:18:31.465 11:12:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:31.465 11:12:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:31.465 11:12:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1631631' 00:18:31.465 killing process with pid 1631631 00:18:31.465 11:12:51 -- common/autotest_common.sh@955 -- # kill 1631631 00:18:31.465 11:12:51 -- common/autotest_common.sh@960 -- # wait 1631631 00:18:31.725 11:12:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:31.725 11:12:52 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:18:31.725 00:18:31.725 real 0m18.359s 00:18:31.725 user 0m25.877s 00:18:31.725 sys 0m4.731s 00:18:31.725 11:12:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:31.725 11:12:52 -- common/autotest_common.sh@10 -- # set +x 00:18:31.725 ************************************ 00:18:31.725 END TEST nvmf_queue_depth 00:18:31.725 ************************************ 00:18:31.725 11:12:52 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:18:31.725 11:12:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:31.725 11:12:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:31.725 11:12:52 -- common/autotest_common.sh@10 -- # set +x 00:18:31.725 ************************************ 00:18:31.725 START TEST nvmf_multipath 00:18:31.725 ************************************ 00:18:31.725 11:12:52 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:18:31.985 * Looking for test storage... 
00:18:31.985 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:31.985 11:12:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:31.985 11:12:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:31.985 11:12:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:31.985 11:12:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:31.985 11:12:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:31.985 11:12:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:31.985 11:12:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:31.985 11:12:52 -- scripts/common.sh@335 -- # IFS=.-: 00:18:31.985 11:12:52 -- scripts/common.sh@335 -- # read -ra ver1 00:18:31.985 11:12:52 -- scripts/common.sh@336 -- # IFS=.-: 00:18:31.985 11:12:52 -- scripts/common.sh@336 -- # read -ra ver2 00:18:31.985 11:12:52 -- scripts/common.sh@337 -- # local 'op=<' 00:18:31.985 11:12:52 -- scripts/common.sh@339 -- # ver1_l=2 00:18:31.985 11:12:52 -- scripts/common.sh@340 -- # ver2_l=1 00:18:31.985 11:12:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:31.985 11:12:52 -- scripts/common.sh@343 -- # case "$op" in 00:18:31.985 11:12:52 -- scripts/common.sh@344 -- # : 1 00:18:31.985 11:12:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:31.985 11:12:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:31.985 11:12:52 -- scripts/common.sh@364 -- # decimal 1 00:18:31.985 11:12:52 -- scripts/common.sh@352 -- # local d=1 00:18:31.985 11:12:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:31.985 11:12:52 -- scripts/common.sh@354 -- # echo 1 00:18:31.985 11:12:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:31.985 11:12:52 -- scripts/common.sh@365 -- # decimal 2 00:18:31.985 11:12:52 -- scripts/common.sh@352 -- # local d=2 00:18:31.985 11:12:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:31.985 11:12:52 -- scripts/common.sh@354 -- # echo 2 00:18:31.985 11:12:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:31.985 11:12:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:31.985 11:12:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:31.985 11:12:52 -- scripts/common.sh@367 -- # return 0 00:18:31.985 11:12:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:31.985 11:12:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:31.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.985 --rc genhtml_branch_coverage=1 00:18:31.985 --rc genhtml_function_coverage=1 00:18:31.985 --rc genhtml_legend=1 00:18:31.985 --rc geninfo_all_blocks=1 00:18:31.985 --rc geninfo_unexecuted_blocks=1 00:18:31.985 00:18:31.985 ' 00:18:31.985 11:12:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:31.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.985 --rc genhtml_branch_coverage=1 00:18:31.985 --rc genhtml_function_coverage=1 00:18:31.985 --rc genhtml_legend=1 00:18:31.985 --rc geninfo_all_blocks=1 00:18:31.985 --rc geninfo_unexecuted_blocks=1 00:18:31.985 00:18:31.985 ' 00:18:31.985 11:12:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:31.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.985 --rc genhtml_branch_coverage=1 00:18:31.985 --rc genhtml_function_coverage=1 00:18:31.985 --rc genhtml_legend=1 00:18:31.985 --rc geninfo_all_blocks=1 00:18:31.985 --rc geninfo_unexecuted_blocks=1 00:18:31.985 00:18:31.985 ' 
00:18:31.985 11:12:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:31.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.985 --rc genhtml_branch_coverage=1 00:18:31.985 --rc genhtml_function_coverage=1 00:18:31.985 --rc genhtml_legend=1 00:18:31.985 --rc geninfo_all_blocks=1 00:18:31.985 --rc geninfo_unexecuted_blocks=1 00:18:31.985 00:18:31.985 ' 00:18:31.985 11:12:52 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:31.985 11:12:52 -- nvmf/common.sh@7 -- # uname -s 00:18:31.985 11:12:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:31.985 11:12:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:31.985 11:12:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:31.985 11:12:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:31.985 11:12:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:31.985 11:12:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:31.985 11:12:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.985 11:12:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:31.985 11:12:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.985 11:12:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:31.985 11:12:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:18:31.985 11:12:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:18:31.985 11:12:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.985 11:12:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:31.985 11:12:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:31.985 11:12:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:31.985 11:12:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:31.985 11:12:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.985 11:12:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.986 11:12:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.986 11:12:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.986 11:12:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.986 11:12:52 -- paths/export.sh@5 -- # export PATH 00:18:31.986 11:12:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.986 11:12:52 -- nvmf/common.sh@46 -- # : 0 00:18:31.986 11:12:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:31.986 11:12:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:31.986 11:12:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:31.986 11:12:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:31.986 11:12:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.986 11:12:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:31.986 11:12:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:31.986 11:12:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:31.986 11:12:52 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:31.986 11:12:52 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:31.986 11:12:52 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:31.986 11:12:52 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:31.986 11:12:52 -- target/multipath.sh@43 -- # nvmftestinit 00:18:31.986 11:12:52 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:18:31.986 11:12:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:31.986 11:12:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:31.986 11:12:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:31.986 11:12:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:31.986 11:12:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.986 11:12:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.986 11:12:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.986 11:12:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:31.986 11:12:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:31.986 11:12:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:31.986 11:12:52 -- common/autotest_common.sh@10 -- # set +x 00:18:37.263 11:12:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:37.263 11:12:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:37.263 11:12:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:37.263 11:12:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:37.263 11:12:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:37.263 11:12:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:37.263 11:12:57 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:18:37.263 11:12:57 -- nvmf/common.sh@294 -- # net_devs=() 00:18:37.263 11:12:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:37.263 11:12:57 -- nvmf/common.sh@295 -- # e810=() 00:18:37.263 11:12:57 -- nvmf/common.sh@295 -- # local -ga e810 00:18:37.263 11:12:57 -- nvmf/common.sh@296 -- # x722=() 00:18:37.263 11:12:57 -- nvmf/common.sh@296 -- # local -ga x722 00:18:37.263 11:12:57 -- nvmf/common.sh@297 -- # mlx=() 00:18:37.263 11:12:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:37.263 11:12:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:37.263 11:12:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:37.263 11:12:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:37.263 11:12:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:37.263 11:12:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:37.263 11:12:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:37.263 11:12:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:37.263 11:12:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:37.263 11:12:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:37.263 11:12:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:37.263 11:12:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:37.263 11:12:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:37.263 11:12:57 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:18:37.263 11:12:57 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:18:37.263 11:12:57 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:18:37.263 11:12:57 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:18:37.263 11:12:57 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:18:37.263 11:12:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:37.263 11:12:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:37.263 11:12:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:18:37.263 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:18:37.263 11:12:57 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:37.263 11:12:57 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:37.263 11:12:57 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:37.263 11:12:57 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:37.263 11:12:57 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:37.263 11:12:57 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:37.263 11:12:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:37.263 11:12:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:18:37.263 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:18:37.263 11:12:57 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:37.263 11:12:57 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:37.263 11:12:57 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:37.263 11:12:57 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:37.263 11:12:57 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:37.263 11:12:57 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:37.263 11:12:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:37.263 11:12:57 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:18:37.263 11:12:57 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:37.263 11:12:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:37.263 11:12:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:37.263 11:12:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:37.263 11:12:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:37.263 Found net devices under 0000:18:00.0: mlx_0_0 00:18:37.263 11:12:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:37.263 11:12:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:37.263 11:12:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:37.263 11:12:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:37.263 11:12:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:37.263 11:12:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:37.263 Found net devices under 0000:18:00.1: mlx_0_1 00:18:37.263 11:12:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:37.263 11:12:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:37.263 11:12:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:37.263 11:12:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:37.263 11:12:57 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:18:37.263 11:12:57 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:18:37.263 11:12:57 -- nvmf/common.sh@408 -- # rdma_device_init 00:18:37.263 11:12:57 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:18:37.263 11:12:57 -- nvmf/common.sh@57 -- # uname 00:18:37.263 11:12:57 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:18:37.263 11:12:57 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:18:37.263 11:12:57 -- nvmf/common.sh@62 -- # modprobe ib_core 00:18:37.263 11:12:57 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:18:37.263 11:12:57 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:18:37.263 11:12:57 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:18:37.263 11:12:57 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:18:37.263 11:12:57 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:18:37.263 11:12:57 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:18:37.263 11:12:57 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:37.263 11:12:57 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:18:37.263 11:12:57 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:37.263 11:12:57 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:37.263 11:12:57 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:37.263 11:12:57 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:37.263 11:12:57 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:37.263 11:12:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:37.263 11:12:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:37.263 11:12:57 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:37.263 11:12:57 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:37.263 11:12:57 -- nvmf/common.sh@104 -- # continue 2 00:18:37.263 11:12:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:37.263 11:12:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:37.263 11:12:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:37.263 11:12:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:37.263 11:12:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\1 ]] 00:18:37.263 11:12:57 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:37.263 11:12:57 -- nvmf/common.sh@104 -- # continue 2 00:18:37.263 11:12:57 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:37.263 11:12:57 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:18:37.263 11:12:57 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:37.263 11:12:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:37.263 11:12:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:37.263 11:12:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:37.263 11:12:57 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:18:37.263 11:12:57 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:18:37.263 11:12:57 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:18:37.263 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:37.263 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:18:37.263 altname enp24s0f0np0 00:18:37.263 altname ens785f0np0 00:18:37.263 inet 192.168.100.8/24 scope global mlx_0_0 00:18:37.263 valid_lft forever preferred_lft forever 00:18:37.263 11:12:57 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:37.263 11:12:57 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:18:37.263 11:12:57 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:37.263 11:12:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:37.263 11:12:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:37.263 11:12:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:37.263 11:12:57 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:18:37.263 11:12:57 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:18:37.263 11:12:57 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:18:37.263 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:37.263 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:18:37.263 altname enp24s0f1np1 00:18:37.263 altname ens785f1np1 00:18:37.263 inet 192.168.100.9/24 scope global mlx_0_1 00:18:37.263 valid_lft forever preferred_lft forever 00:18:37.263 11:12:57 -- nvmf/common.sh@410 -- # return 0 00:18:37.263 11:12:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:37.263 11:12:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:37.263 11:12:57 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:18:37.263 11:12:57 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:18:37.263 11:12:57 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:18:37.263 11:12:57 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:37.263 11:12:57 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:37.263 11:12:57 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:37.263 11:12:57 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:37.523 11:12:57 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:37.523 11:12:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:37.523 11:12:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:37.523 11:12:57 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:37.523 11:12:57 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:37.523 11:12:57 -- nvmf/common.sh@104 -- # continue 2 00:18:37.523 11:12:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:37.523 11:12:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:37.523 11:12:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:37.523 11:12:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:18:37.523 11:12:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:37.523 11:12:57 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:37.523 11:12:57 -- nvmf/common.sh@104 -- # continue 2 00:18:37.523 11:12:57 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:37.523 11:12:57 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:18:37.523 11:12:57 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:37.523 11:12:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:37.523 11:12:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:37.523 11:12:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:37.523 11:12:57 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:37.523 11:12:57 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:18:37.523 11:12:57 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:37.523 11:12:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:37.523 11:12:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:37.523 11:12:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:37.523 11:12:57 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:18:37.523 192.168.100.9' 00:18:37.523 11:12:57 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:18:37.523 192.168.100.9' 00:18:37.523 11:12:57 -- nvmf/common.sh@445 -- # head -n 1 00:18:37.523 11:12:57 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:37.523 11:12:57 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:37.523 192.168.100.9' 00:18:37.523 11:12:57 -- nvmf/common.sh@446 -- # tail -n +2 00:18:37.523 11:12:57 -- nvmf/common.sh@446 -- # head -n 1 00:18:37.523 11:12:57 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:37.523 11:12:57 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:18:37.523 11:12:57 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:37.523 11:12:57 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:18:37.523 11:12:57 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:18:37.523 11:12:57 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:18:37.523 11:12:57 -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:18:37.523 11:12:57 -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:18:37.523 11:12:57 -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:18:37.523 run this test only with TCP transport for now 00:18:37.523 11:12:57 -- target/multipath.sh@53 -- # nvmftestfini 00:18:37.523 11:12:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:37.523 11:12:57 -- nvmf/common.sh@116 -- # sync 00:18:37.523 11:12:57 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:18:37.523 11:12:57 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:18:37.523 11:12:57 -- nvmf/common.sh@119 -- # set +e 00:18:37.523 11:12:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:37.523 11:12:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:18:37.523 rmmod nvme_rdma 00:18:37.523 rmmod nvme_fabrics 00:18:37.523 11:12:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:37.523 11:12:57 -- nvmf/common.sh@123 -- # set -e 00:18:37.523 11:12:57 -- nvmf/common.sh@124 -- # return 0 00:18:37.523 11:12:57 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:18:37.523 11:12:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:37.523 11:12:57 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:18:37.523 11:12:57 -- target/multipath.sh@54 -- # exit 0 00:18:37.523 11:12:57 -- target/multipath.sh@1 -- # nvmftestfini 00:18:37.523 11:12:57 -- 
nvmf/common.sh@476 -- # nvmfcleanup 00:18:37.523 11:12:57 -- nvmf/common.sh@116 -- # sync 00:18:37.523 11:12:57 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:18:37.523 11:12:57 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:18:37.523 11:12:57 -- nvmf/common.sh@119 -- # set +e 00:18:37.523 11:12:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:37.523 11:12:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:18:37.523 11:12:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:37.523 11:12:57 -- nvmf/common.sh@123 -- # set -e 00:18:37.523 11:12:57 -- nvmf/common.sh@124 -- # return 0 00:18:37.523 11:12:57 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:18:37.523 11:12:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:37.524 11:12:57 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:18:37.524 00:18:37.524 real 0m5.671s 00:18:37.524 user 0m1.672s 00:18:37.524 sys 0m4.133s 00:18:37.524 11:12:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:37.524 11:12:57 -- common/autotest_common.sh@10 -- # set +x 00:18:37.524 ************************************ 00:18:37.524 END TEST nvmf_multipath 00:18:37.524 ************************************ 00:18:37.524 11:12:57 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:18:37.524 11:12:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:37.524 11:12:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:37.524 11:12:57 -- common/autotest_common.sh@10 -- # set +x 00:18:37.524 ************************************ 00:18:37.524 START TEST nvmf_zcopy 00:18:37.524 ************************************ 00:18:37.524 11:12:57 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:18:37.524 * Looking for test storage... 00:18:37.524 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:37.524 11:12:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:37.524 11:12:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:37.524 11:12:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:37.784 11:12:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:37.784 11:12:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:37.784 11:12:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:37.784 11:12:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:37.784 11:12:58 -- scripts/common.sh@335 -- # IFS=.-: 00:18:37.784 11:12:58 -- scripts/common.sh@335 -- # read -ra ver1 00:18:37.784 11:12:58 -- scripts/common.sh@336 -- # IFS=.-: 00:18:37.784 11:12:58 -- scripts/common.sh@336 -- # read -ra ver2 00:18:37.784 11:12:58 -- scripts/common.sh@337 -- # local 'op=<' 00:18:37.784 11:12:58 -- scripts/common.sh@339 -- # ver1_l=2 00:18:37.784 11:12:58 -- scripts/common.sh@340 -- # ver2_l=1 00:18:37.784 11:12:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:37.784 11:12:58 -- scripts/common.sh@343 -- # case "$op" in 00:18:37.784 11:12:58 -- scripts/common.sh@344 -- # : 1 00:18:37.784 11:12:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:37.784 11:12:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:37.784 11:12:58 -- scripts/common.sh@364 -- # decimal 1 00:18:37.784 11:12:58 -- scripts/common.sh@352 -- # local d=1 00:18:37.784 11:12:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:37.784 11:12:58 -- scripts/common.sh@354 -- # echo 1 00:18:37.784 11:12:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:37.784 11:12:58 -- scripts/common.sh@365 -- # decimal 2 00:18:37.784 11:12:58 -- scripts/common.sh@352 -- # local d=2 00:18:37.784 11:12:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:37.784 11:12:58 -- scripts/common.sh@354 -- # echo 2 00:18:37.784 11:12:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:37.784 11:12:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:37.784 11:12:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:37.784 11:12:58 -- scripts/common.sh@367 -- # return 0 00:18:37.784 11:12:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:37.784 11:12:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:37.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.784 --rc genhtml_branch_coverage=1 00:18:37.784 --rc genhtml_function_coverage=1 00:18:37.784 --rc genhtml_legend=1 00:18:37.784 --rc geninfo_all_blocks=1 00:18:37.784 --rc geninfo_unexecuted_blocks=1 00:18:37.784 00:18:37.784 ' 00:18:37.784 11:12:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:37.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.784 --rc genhtml_branch_coverage=1 00:18:37.784 --rc genhtml_function_coverage=1 00:18:37.784 --rc genhtml_legend=1 00:18:37.784 --rc geninfo_all_blocks=1 00:18:37.784 --rc geninfo_unexecuted_blocks=1 00:18:37.784 00:18:37.784 ' 00:18:37.784 11:12:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:37.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.784 --rc genhtml_branch_coverage=1 00:18:37.784 --rc genhtml_function_coverage=1 00:18:37.784 --rc genhtml_legend=1 00:18:37.784 --rc geninfo_all_blocks=1 00:18:37.784 --rc geninfo_unexecuted_blocks=1 00:18:37.784 00:18:37.784 ' 00:18:37.784 11:12:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:37.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.784 --rc genhtml_branch_coverage=1 00:18:37.784 --rc genhtml_function_coverage=1 00:18:37.784 --rc genhtml_legend=1 00:18:37.784 --rc geninfo_all_blocks=1 00:18:37.784 --rc geninfo_unexecuted_blocks=1 00:18:37.784 00:18:37.784 ' 00:18:37.784 11:12:58 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:37.784 11:12:58 -- nvmf/common.sh@7 -- # uname -s 00:18:37.784 11:12:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:37.784 11:12:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:37.784 11:12:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:37.784 11:12:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:37.784 11:12:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:37.784 11:12:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:37.784 11:12:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:37.784 11:12:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:37.784 11:12:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:37.784 11:12:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:37.784 11:12:58 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:18:37.784 11:12:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:18:37.784 11:12:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:37.784 11:12:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:37.784 11:12:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:37.784 11:12:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:37.784 11:12:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:37.784 11:12:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:37.784 11:12:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:37.784 11:12:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.784 11:12:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.784 11:12:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.784 11:12:58 -- paths/export.sh@5 -- # export PATH 00:18:37.784 11:12:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.784 11:12:58 -- nvmf/common.sh@46 -- # : 0 00:18:37.784 11:12:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:37.784 11:12:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:37.784 11:12:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:37.784 11:12:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:37.784 11:12:58 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:37.784 11:12:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:37.784 11:12:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:37.784 11:12:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:37.784 11:12:58 -- target/zcopy.sh@12 -- # nvmftestinit 00:18:37.784 11:12:58 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:18:37.784 11:12:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:37.784 11:12:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:37.784 11:12:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:37.784 11:12:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:37.784 11:12:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.784 11:12:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:37.784 11:12:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.784 11:12:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:37.784 11:12:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:37.784 11:12:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:37.784 11:12:58 -- common/autotest_common.sh@10 -- # set +x 00:18:43.060 11:13:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:43.060 11:13:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:43.060 11:13:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:43.060 11:13:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:43.060 11:13:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:43.060 11:13:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:43.060 11:13:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:43.060 11:13:03 -- nvmf/common.sh@294 -- # net_devs=() 00:18:43.060 11:13:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:43.060 11:13:03 -- nvmf/common.sh@295 -- # e810=() 00:18:43.060 11:13:03 -- nvmf/common.sh@295 -- # local -ga e810 00:18:43.060 11:13:03 -- nvmf/common.sh@296 -- # x722=() 00:18:43.060 11:13:03 -- nvmf/common.sh@296 -- # local -ga x722 00:18:43.060 11:13:03 -- nvmf/common.sh@297 -- # mlx=() 00:18:43.060 11:13:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:43.060 11:13:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:43.060 11:13:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:43.060 11:13:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:43.060 11:13:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:43.060 11:13:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:43.060 11:13:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:43.060 11:13:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:43.060 11:13:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:43.060 11:13:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:43.061 11:13:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:43.061 11:13:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:43.061 11:13:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:43.061 11:13:03 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:18:43.061 11:13:03 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:18:43.061 11:13:03 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:18:43.061 11:13:03 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:18:43.061 
11:13:03 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:18:43.061 11:13:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:43.061 11:13:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:43.061 11:13:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:18:43.061 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:18:43.061 11:13:03 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:43.061 11:13:03 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:43.061 11:13:03 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:43.061 11:13:03 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:43.061 11:13:03 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:43.061 11:13:03 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:43.061 11:13:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:43.061 11:13:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:18:43.061 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:18:43.061 11:13:03 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:43.061 11:13:03 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:43.061 11:13:03 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:43.061 11:13:03 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:43.061 11:13:03 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:43.061 11:13:03 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:43.061 11:13:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:43.061 11:13:03 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:18:43.061 11:13:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:43.061 11:13:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:43.061 11:13:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:43.061 11:13:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:43.061 11:13:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:43.061 Found net devices under 0000:18:00.0: mlx_0_0 00:18:43.061 11:13:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:43.061 11:13:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:43.061 11:13:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:43.061 11:13:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:43.061 11:13:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:43.061 11:13:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:43.061 Found net devices under 0000:18:00.1: mlx_0_1 00:18:43.061 11:13:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:43.061 11:13:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:43.061 11:13:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:43.061 11:13:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:43.061 11:13:03 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:18:43.061 11:13:03 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:18:43.061 11:13:03 -- nvmf/common.sh@408 -- # rdma_device_init 00:18:43.061 11:13:03 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:18:43.061 11:13:03 -- nvmf/common.sh@57 -- # uname 00:18:43.061 11:13:03 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:18:43.061 11:13:03 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:18:43.061 11:13:03 -- nvmf/common.sh@62 -- # modprobe ib_core 00:18:43.061 11:13:03 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:18:43.061 11:13:03 -- 
nvmf/common.sh@64 -- # modprobe ib_uverbs 00:18:43.061 11:13:03 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:18:43.320 11:13:03 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:18:43.320 11:13:03 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:18:43.320 11:13:03 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:18:43.320 11:13:03 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:43.320 11:13:03 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:18:43.320 11:13:03 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:43.320 11:13:03 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:43.320 11:13:03 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:43.320 11:13:03 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:43.320 11:13:03 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:43.320 11:13:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:43.320 11:13:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:43.320 11:13:03 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:43.320 11:13:03 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:43.320 11:13:03 -- nvmf/common.sh@104 -- # continue 2 00:18:43.320 11:13:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:43.320 11:13:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:43.320 11:13:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:43.320 11:13:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:43.320 11:13:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:43.320 11:13:03 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:43.320 11:13:03 -- nvmf/common.sh@104 -- # continue 2 00:18:43.320 11:13:03 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:43.320 11:13:03 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:18:43.320 11:13:03 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:43.320 11:13:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:43.320 11:13:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:43.321 11:13:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:43.321 11:13:03 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:18:43.321 11:13:03 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:18:43.321 11:13:03 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:18:43.321 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:43.321 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:18:43.321 altname enp24s0f0np0 00:18:43.321 altname ens785f0np0 00:18:43.321 inet 192.168.100.8/24 scope global mlx_0_0 00:18:43.321 valid_lft forever preferred_lft forever 00:18:43.321 11:13:03 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:43.321 11:13:03 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:18:43.321 11:13:03 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:43.321 11:13:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:43.321 11:13:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:43.321 11:13:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:43.321 11:13:03 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:18:43.321 11:13:03 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:18:43.321 11:13:03 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:18:43.321 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:43.321 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:18:43.321 altname enp24s0f1np1 00:18:43.321 altname 
ens785f1np1 00:18:43.321 inet 192.168.100.9/24 scope global mlx_0_1 00:18:43.321 valid_lft forever preferred_lft forever 00:18:43.321 11:13:03 -- nvmf/common.sh@410 -- # return 0 00:18:43.321 11:13:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:43.321 11:13:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:43.321 11:13:03 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:18:43.321 11:13:03 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:18:43.321 11:13:03 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:18:43.321 11:13:03 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:43.321 11:13:03 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:43.321 11:13:03 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:43.321 11:13:03 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:43.321 11:13:03 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:43.321 11:13:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:43.321 11:13:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:43.321 11:13:03 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:43.321 11:13:03 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:43.321 11:13:03 -- nvmf/common.sh@104 -- # continue 2 00:18:43.321 11:13:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:43.321 11:13:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:43.321 11:13:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:43.321 11:13:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:43.321 11:13:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:43.321 11:13:03 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:43.321 11:13:03 -- nvmf/common.sh@104 -- # continue 2 00:18:43.321 11:13:03 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:43.321 11:13:03 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:18:43.321 11:13:03 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:43.321 11:13:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:43.321 11:13:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:43.321 11:13:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:43.321 11:13:03 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:43.321 11:13:03 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:18:43.321 11:13:03 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:43.321 11:13:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:43.321 11:13:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:43.321 11:13:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:43.321 11:13:03 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:18:43.321 192.168.100.9' 00:18:43.321 11:13:03 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:18:43.321 192.168.100.9' 00:18:43.321 11:13:03 -- nvmf/common.sh@445 -- # head -n 1 00:18:43.321 11:13:03 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:43.321 11:13:03 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:43.321 192.168.100.9' 00:18:43.321 11:13:03 -- nvmf/common.sh@446 -- # tail -n +2 00:18:43.321 11:13:03 -- nvmf/common.sh@446 -- # head -n 1 00:18:43.321 11:13:03 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:43.321 11:13:03 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:18:43.321 11:13:03 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:43.321 
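The get_ip_address calls traced above pull each RDMA interface's IPv4 address out of 'ip -o -4' output and then assemble NVMF_FIRST_TARGET_IP / NVMF_SECOND_TARGET_IP from the resulting list. A minimal stand-alone sketch of that pipeline, assuming the mlx_0_0 / mlx_0_1 interface names seen in this run:

# Sketch only: the address discovery performed by nvmf/common.sh above.
get_ip_address() {
    local interface=$1
    # "ip -o -4" prints one line per IPv4 address; field 4 is ADDR/PREFIX,
    # so strip the prefix length to get the bare address.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run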
11:13:03 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:18:43.321 11:13:03 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:18:43.321 11:13:03 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:18:43.321 11:13:03 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:43.321 11:13:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:43.321 11:13:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:43.321 11:13:03 -- common/autotest_common.sh@10 -- # set +x 00:18:43.321 11:13:03 -- nvmf/common.sh@469 -- # nvmfpid=1640344 00:18:43.321 11:13:03 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:43.321 11:13:03 -- nvmf/common.sh@470 -- # waitforlisten 1640344 00:18:43.321 11:13:03 -- common/autotest_common.sh@829 -- # '[' -z 1640344 ']' 00:18:43.321 11:13:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.321 11:13:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:43.321 11:13:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.321 11:13:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:43.321 11:13:03 -- common/autotest_common.sh@10 -- # set +x 00:18:43.321 [2024-12-13 11:13:03.843967] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:43.321 [2024-12-13 11:13:03.844013] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:43.321 EAL: No free 2048 kB hugepages reported on node 1 00:18:43.580 [2024-12-13 11:13:03.897121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.580 [2024-12-13 11:13:03.963880] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:43.580 [2024-12-13 11:13:03.963983] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:43.580 [2024-12-13 11:13:03.963989] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:43.580 [2024-12-13 11:13:03.963995] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
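nvmfappstart above launches the target binary and then waits for its JSON-RPC socket before issuing any RPCs (the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line). A rough sketch of that start-and-wait step with the arguments used in this run; the polling loop is illustrative, not the actual waitforlisten helper:

# Illustrative sketch, not the real helper.
nvmf_tgt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt
"$nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# Wait until the app has created its RPC socket.
while [ ! -S /var/tmp/spdk.sock ]; do
    sleep 0.1
done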
00:18:43.580 [2024-12-13 11:13:03.964014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.148 11:13:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:44.148 11:13:04 -- common/autotest_common.sh@862 -- # return 0 00:18:44.148 11:13:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:44.148 11:13:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:44.148 11:13:04 -- common/autotest_common.sh@10 -- # set +x 00:18:44.149 11:13:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:44.149 11:13:04 -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:18:44.149 11:13:04 -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:18:44.149 Unsupported transport: rdma 00:18:44.149 11:13:04 -- target/zcopy.sh@17 -- # exit 0 00:18:44.149 11:13:04 -- target/zcopy.sh@1 -- # process_shm --id 0 00:18:44.149 11:13:04 -- common/autotest_common.sh@806 -- # type=--id 00:18:44.149 11:13:04 -- common/autotest_common.sh@807 -- # id=0 00:18:44.149 11:13:04 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:44.149 11:13:04 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:44.149 11:13:04 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:44.149 11:13:04 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:44.149 11:13:04 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:44.149 11:13:04 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:44.149 nvmf_trace.0 00:18:44.149 11:13:04 -- common/autotest_common.sh@821 -- # return 0 00:18:44.149 11:13:04 -- target/zcopy.sh@1 -- # nvmftestfini 00:18:44.149 11:13:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:44.149 11:13:04 -- nvmf/common.sh@116 -- # sync 00:18:44.149 11:13:04 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:18:44.149 11:13:04 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:18:44.149 11:13:04 -- nvmf/common.sh@119 -- # set +e 00:18:44.149 11:13:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:44.149 11:13:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:18:44.149 rmmod nvme_rdma 00:18:44.149 rmmod nvme_fabrics 00:18:44.407 11:13:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:44.407 11:13:04 -- nvmf/common.sh@123 -- # set -e 00:18:44.407 11:13:04 -- nvmf/common.sh@124 -- # return 0 00:18:44.407 11:13:04 -- nvmf/common.sh@477 -- # '[' -n 1640344 ']' 00:18:44.407 11:13:04 -- nvmf/common.sh@478 -- # killprocess 1640344 00:18:44.407 11:13:04 -- common/autotest_common.sh@936 -- # '[' -z 1640344 ']' 00:18:44.407 11:13:04 -- common/autotest_common.sh@940 -- # kill -0 1640344 00:18:44.407 11:13:04 -- common/autotest_common.sh@941 -- # uname 00:18:44.407 11:13:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:44.407 11:13:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1640344 00:18:44.407 11:13:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:44.407 11:13:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:44.407 11:13:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1640344' 00:18:44.407 killing process with pid 1640344 00:18:44.407 11:13:04 -- common/autotest_common.sh@955 -- # kill 1640344 00:18:44.407 11:13:04 -- common/autotest_common.sh@960 -- # wait 1640344 00:18:44.676 11:13:04 -- nvmf/common.sh@480 -- # '[' 
'' == iso ']' 00:18:44.676 11:13:04 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:18:44.676 00:18:44.676 real 0m6.989s 00:18:44.676 user 0m3.119s 00:18:44.676 sys 0m4.487s 00:18:44.676 11:13:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:44.676 11:13:04 -- common/autotest_common.sh@10 -- # set +x 00:18:44.676 ************************************ 00:18:44.676 END TEST nvmf_zcopy 00:18:44.676 ************************************ 00:18:44.676 11:13:05 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:18:44.676 11:13:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:44.676 11:13:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:44.676 11:13:05 -- common/autotest_common.sh@10 -- # set +x 00:18:44.676 ************************************ 00:18:44.676 START TEST nvmf_nmic 00:18:44.676 ************************************ 00:18:44.676 11:13:05 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:18:44.676 * Looking for test storage... 00:18:44.676 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:44.676 11:13:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:44.676 11:13:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:44.676 11:13:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:44.676 11:13:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:44.676 11:13:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:44.676 11:13:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:44.676 11:13:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:44.676 11:13:05 -- scripts/common.sh@335 -- # IFS=.-: 00:18:44.676 11:13:05 -- scripts/common.sh@335 -- # read -ra ver1 00:18:44.676 11:13:05 -- scripts/common.sh@336 -- # IFS=.-: 00:18:44.676 11:13:05 -- scripts/common.sh@336 -- # read -ra ver2 00:18:44.676 11:13:05 -- scripts/common.sh@337 -- # local 'op=<' 00:18:44.676 11:13:05 -- scripts/common.sh@339 -- # ver1_l=2 00:18:44.676 11:13:05 -- scripts/common.sh@340 -- # ver2_l=1 00:18:44.676 11:13:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:44.676 11:13:05 -- scripts/common.sh@343 -- # case "$op" in 00:18:44.676 11:13:05 -- scripts/common.sh@344 -- # : 1 00:18:44.676 11:13:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:44.676 11:13:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:44.676 11:13:05 -- scripts/common.sh@364 -- # decimal 1 00:18:44.676 11:13:05 -- scripts/common.sh@352 -- # local d=1 00:18:44.676 11:13:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:44.676 11:13:05 -- scripts/common.sh@354 -- # echo 1 00:18:44.676 11:13:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:44.676 11:13:05 -- scripts/common.sh@365 -- # decimal 2 00:18:44.676 11:13:05 -- scripts/common.sh@352 -- # local d=2 00:18:44.676 11:13:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:44.676 11:13:05 -- scripts/common.sh@354 -- # echo 2 00:18:44.676 11:13:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:44.676 11:13:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:44.676 11:13:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:44.676 11:13:05 -- scripts/common.sh@367 -- # return 0 00:18:44.676 11:13:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:44.676 11:13:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:44.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.676 --rc genhtml_branch_coverage=1 00:18:44.676 --rc genhtml_function_coverage=1 00:18:44.676 --rc genhtml_legend=1 00:18:44.676 --rc geninfo_all_blocks=1 00:18:44.676 --rc geninfo_unexecuted_blocks=1 00:18:44.676 00:18:44.676 ' 00:18:44.676 11:13:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:44.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.676 --rc genhtml_branch_coverage=1 00:18:44.676 --rc genhtml_function_coverage=1 00:18:44.676 --rc genhtml_legend=1 00:18:44.676 --rc geninfo_all_blocks=1 00:18:44.676 --rc geninfo_unexecuted_blocks=1 00:18:44.676 00:18:44.676 ' 00:18:44.676 11:13:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:44.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.676 --rc genhtml_branch_coverage=1 00:18:44.676 --rc genhtml_function_coverage=1 00:18:44.676 --rc genhtml_legend=1 00:18:44.676 --rc geninfo_all_blocks=1 00:18:44.676 --rc geninfo_unexecuted_blocks=1 00:18:44.676 00:18:44.676 ' 00:18:44.676 11:13:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:44.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.676 --rc genhtml_branch_coverage=1 00:18:44.676 --rc genhtml_function_coverage=1 00:18:44.676 --rc genhtml_legend=1 00:18:44.676 --rc geninfo_all_blocks=1 00:18:44.676 --rc geninfo_unexecuted_blocks=1 00:18:44.676 00:18:44.676 ' 00:18:44.676 11:13:05 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:44.676 11:13:05 -- nvmf/common.sh@7 -- # uname -s 00:18:44.676 11:13:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:44.676 11:13:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:44.676 11:13:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:44.676 11:13:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:44.676 11:13:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:44.676 11:13:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:44.676 11:13:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:44.676 11:13:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:44.676 11:13:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:44.677 11:13:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:44.677 11:13:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 
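The nvme gen-hostnqn call just above produces the host NQN for this run; the next lines derive NVME_HOSTID and build the NVME_HOST arguments that every later nvme connect reuses. A short sketch; extracting the host ID from the NQN's uuid suffix with parameter expansion is an assumption here, the trace only shows the resulting values:

# Sketch of the host identity setup (uuid extraction is an assumption).
NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumed: host ID is the uuid part of the NQN
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")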
00:18:44.677 11:13:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:18:44.677 11:13:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:44.677 11:13:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:44.677 11:13:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:44.677 11:13:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:44.677 11:13:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:44.677 11:13:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:44.677 11:13:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:44.677 11:13:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.677 11:13:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.677 11:13:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.677 11:13:05 -- paths/export.sh@5 -- # export PATH 00:18:44.677 11:13:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.677 11:13:05 -- nvmf/common.sh@46 -- # : 0 00:18:44.677 11:13:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:44.677 11:13:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:44.677 11:13:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:44.677 11:13:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:44.677 11:13:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:44.677 11:13:05 -- 
nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:44.677 11:13:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:44.677 11:13:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:44.677 11:13:05 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:44.677 11:13:05 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:44.677 11:13:05 -- target/nmic.sh@14 -- # nvmftestinit 00:18:44.677 11:13:05 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:18:44.677 11:13:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:44.677 11:13:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:44.677 11:13:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:44.677 11:13:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:44.677 11:13:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.677 11:13:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:44.677 11:13:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.677 11:13:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:44.677 11:13:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:44.677 11:13:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:44.677 11:13:05 -- common/autotest_common.sh@10 -- # set +x 00:18:51.250 11:13:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:51.250 11:13:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:51.250 11:13:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:51.250 11:13:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:51.250 11:13:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:51.250 11:13:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:51.250 11:13:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:51.250 11:13:10 -- nvmf/common.sh@294 -- # net_devs=() 00:18:51.250 11:13:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:51.250 11:13:10 -- nvmf/common.sh@295 -- # e810=() 00:18:51.250 11:13:10 -- nvmf/common.sh@295 -- # local -ga e810 00:18:51.250 11:13:10 -- nvmf/common.sh@296 -- # x722=() 00:18:51.250 11:13:10 -- nvmf/common.sh@296 -- # local -ga x722 00:18:51.250 11:13:10 -- nvmf/common.sh@297 -- # mlx=() 00:18:51.250 11:13:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:51.250 11:13:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:51.250 11:13:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:51.250 11:13:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:51.250 11:13:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:51.250 11:13:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:51.250 11:13:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:51.250 11:13:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:51.250 11:13:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:51.250 11:13:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:51.250 11:13:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:51.250 11:13:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:51.250 11:13:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:51.250 11:13:10 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:18:51.250 11:13:10 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:18:51.250 11:13:10 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:18:51.250 11:13:10 
-- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:18:51.250 11:13:10 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:18:51.250 11:13:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:51.250 11:13:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:51.250 11:13:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:18:51.250 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:18:51.250 11:13:10 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:51.250 11:13:10 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:51.250 11:13:10 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:51.250 11:13:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:51.250 11:13:10 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:51.250 11:13:10 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:51.250 11:13:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:51.250 11:13:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:18:51.250 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:18:51.250 11:13:10 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:51.250 11:13:10 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:51.250 11:13:10 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:51.250 11:13:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:51.250 11:13:10 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:51.250 11:13:10 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:51.250 11:13:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:51.250 11:13:10 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:18:51.250 11:13:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:51.250 11:13:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.250 11:13:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:51.250 11:13:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.250 11:13:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:51.250 Found net devices under 0000:18:00.0: mlx_0_0 00:18:51.250 11:13:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:51.250 11:13:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:51.250 11:13:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.250 11:13:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:51.250 11:13:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.250 11:13:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:51.250 Found net devices under 0000:18:00.1: mlx_0_1 00:18:51.250 11:13:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:51.250 11:13:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:51.250 11:13:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:51.250 11:13:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:51.250 11:13:10 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:18:51.250 11:13:10 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:18:51.250 11:13:10 -- nvmf/common.sh@408 -- # rdma_device_init 00:18:51.250 11:13:10 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:18:51.250 11:13:10 -- nvmf/common.sh@57 -- # uname 00:18:51.250 11:13:10 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:18:51.250 11:13:10 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:18:51.250 11:13:10 -- nvmf/common.sh@62 -- # modprobe ib_core 00:18:51.250 11:13:10 -- 
nvmf/common.sh@63 -- # modprobe ib_umad 00:18:51.250 11:13:10 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:18:51.250 11:13:10 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:18:51.250 11:13:10 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:18:51.250 11:13:10 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:18:51.250 11:13:10 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:18:51.250 11:13:10 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:51.250 11:13:10 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:18:51.250 11:13:10 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:51.250 11:13:10 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:51.250 11:13:10 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:51.250 11:13:10 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:51.250 11:13:10 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:51.250 11:13:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:51.250 11:13:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:51.250 11:13:10 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:51.250 11:13:10 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:51.250 11:13:10 -- nvmf/common.sh@104 -- # continue 2 00:18:51.250 11:13:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:51.250 11:13:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:51.250 11:13:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:51.250 11:13:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:51.250 11:13:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:51.250 11:13:10 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:51.250 11:13:10 -- nvmf/common.sh@104 -- # continue 2 00:18:51.250 11:13:10 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:51.250 11:13:10 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:18:51.250 11:13:10 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:51.250 11:13:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:51.250 11:13:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:51.250 11:13:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:51.250 11:13:10 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:18:51.250 11:13:10 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:18:51.250 11:13:10 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:18:51.250 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:51.250 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:18:51.250 altname enp24s0f0np0 00:18:51.250 altname ens785f0np0 00:18:51.250 inet 192.168.100.8/24 scope global mlx_0_0 00:18:51.250 valid_lft forever preferred_lft forever 00:18:51.250 11:13:10 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:51.250 11:13:10 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:18:51.250 11:13:10 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:51.250 11:13:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:51.250 11:13:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:51.250 11:13:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:51.250 11:13:10 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:18:51.250 11:13:10 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:18:51.250 11:13:10 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:18:51.250 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:51.250 link/ether 50:6b:4b:b4:ac:7b brd 
ff:ff:ff:ff:ff:ff 00:18:51.250 altname enp24s0f1np1 00:18:51.250 altname ens785f1np1 00:18:51.250 inet 192.168.100.9/24 scope global mlx_0_1 00:18:51.250 valid_lft forever preferred_lft forever 00:18:51.250 11:13:10 -- nvmf/common.sh@410 -- # return 0 00:18:51.250 11:13:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:51.250 11:13:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:51.250 11:13:10 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:18:51.250 11:13:10 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:18:51.250 11:13:10 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:18:51.250 11:13:10 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:51.250 11:13:10 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:51.250 11:13:10 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:51.250 11:13:10 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:51.250 11:13:10 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:51.250 11:13:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:51.250 11:13:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:51.250 11:13:10 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:51.250 11:13:10 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:51.250 11:13:10 -- nvmf/common.sh@104 -- # continue 2 00:18:51.250 11:13:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:51.250 11:13:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:51.250 11:13:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:51.250 11:13:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:51.250 11:13:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:51.250 11:13:10 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:51.250 11:13:10 -- nvmf/common.sh@104 -- # continue 2 00:18:51.250 11:13:10 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:51.250 11:13:10 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:18:51.251 11:13:10 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:51.251 11:13:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:51.251 11:13:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:51.251 11:13:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:51.251 11:13:10 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:51.251 11:13:10 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:18:51.251 11:13:10 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:51.251 11:13:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:51.251 11:13:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:51.251 11:13:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:51.251 11:13:10 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:18:51.251 192.168.100.9' 00:18:51.251 11:13:10 -- nvmf/common.sh@445 -- # head -n 1 00:18:51.251 11:13:10 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:18:51.251 192.168.100.9' 00:18:51.251 11:13:10 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:51.251 11:13:10 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:51.251 192.168.100.9' 00:18:51.251 11:13:10 -- nvmf/common.sh@446 -- # tail -n +2 00:18:51.251 11:13:10 -- nvmf/common.sh@446 -- # head -n 1 00:18:51.251 11:13:10 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:51.251 11:13:10 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:18:51.251 11:13:10 -- nvmf/common.sh@451 -- # 
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:51.251 11:13:10 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:18:51.251 11:13:10 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:18:51.251 11:13:10 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:18:51.251 11:13:10 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:51.251 11:13:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:51.251 11:13:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:51.251 11:13:10 -- common/autotest_common.sh@10 -- # set +x 00:18:51.251 11:13:10 -- nvmf/common.sh@469 -- # nvmfpid=1643751 00:18:51.251 11:13:10 -- nvmf/common.sh@470 -- # waitforlisten 1643751 00:18:51.251 11:13:10 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:51.251 11:13:10 -- common/autotest_common.sh@829 -- # '[' -z 1643751 ']' 00:18:51.251 11:13:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.251 11:13:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:51.251 11:13:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.251 11:13:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:51.251 11:13:10 -- common/autotest_common.sh@10 -- # set +x 00:18:51.251 [2024-12-13 11:13:10.838948] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:51.251 [2024-12-13 11:13:10.838996] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:51.251 EAL: No free 2048 kB hugepages reported on node 1 00:18:51.251 [2024-12-13 11:13:10.893831] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:51.251 [2024-12-13 11:13:10.962739] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:51.251 [2024-12-13 11:13:10.962853] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:51.251 [2024-12-13 11:13:10.962861] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:51.251 [2024-12-13 11:13:10.962867] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:51.251 [2024-12-13 11:13:10.963027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.251 [2024-12-13 11:13:10.963102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:51.251 [2024-12-13 11:13:10.963307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:51.251 [2024-12-13 11:13:10.963310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.251 11:13:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:51.251 11:13:11 -- common/autotest_common.sh@862 -- # return 0 00:18:51.251 11:13:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:51.251 11:13:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:51.251 11:13:11 -- common/autotest_common.sh@10 -- # set +x 00:18:51.251 11:13:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:51.251 11:13:11 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:51.251 11:13:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.251 11:13:11 -- common/autotest_common.sh@10 -- # set +x 00:18:51.251 [2024-12-13 11:13:11.704490] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c1e960/0x1c22e50) succeed. 00:18:51.251 [2024-12-13 11:13:11.712622] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c1ff50/0x1c644f0) succeed. 00:18:51.510 11:13:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.510 11:13:11 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:51.510 11:13:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.510 11:13:11 -- common/autotest_common.sh@10 -- # set +x 00:18:51.510 Malloc0 00:18:51.511 11:13:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.511 11:13:11 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:51.511 11:13:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.511 11:13:11 -- common/autotest_common.sh@10 -- # set +x 00:18:51.511 11:13:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.511 11:13:11 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:51.511 11:13:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.511 11:13:11 -- common/autotest_common.sh@10 -- # set +x 00:18:51.511 11:13:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.511 11:13:11 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:51.511 11:13:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.511 11:13:11 -- common/autotest_common.sh@10 -- # set +x 00:18:51.511 [2024-12-13 11:13:11.870286] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:51.511 11:13:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.511 11:13:11 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:51.511 test case1: single bdev can't be used in multiple subsystems 00:18:51.511 11:13:11 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:51.511 11:13:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.511 11:13:11 -- common/autotest_common.sh@10 -- # set +x 00:18:51.511 11:13:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
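The rpc_cmd calls above drive the running target over its JSON-RPC socket; the same setup can be expressed directly with scripts/rpc.py. A sketch using exactly the commands and arguments shown in the trace:

# Sketch of the target-side setup performed above.
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc_py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc_py bdev_malloc_create 64 512 -b Malloc0
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
# Second subsystem for the negative test that follows (test case1):
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2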
00:18:51.511 11:13:11 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:18:51.511 11:13:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.511 11:13:11 -- common/autotest_common.sh@10 -- # set +x 00:18:51.511 11:13:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.511 11:13:11 -- target/nmic.sh@28 -- # nmic_status=0 00:18:51.511 11:13:11 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:51.511 11:13:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.511 11:13:11 -- common/autotest_common.sh@10 -- # set +x 00:18:51.511 [2024-12-13 11:13:11.894028] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:51.511 [2024-12-13 11:13:11.894046] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:51.511 [2024-12-13 11:13:11.894053] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:51.511 request: 00:18:51.511 { 00:18:51.511 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:51.511 "namespace": { 00:18:51.511 "bdev_name": "Malloc0" 00:18:51.511 }, 00:18:51.511 "method": "nvmf_subsystem_add_ns", 00:18:51.511 "req_id": 1 00:18:51.511 } 00:18:51.511 Got JSON-RPC error response 00:18:51.511 response: 00:18:51.511 { 00:18:51.511 "code": -32602, 00:18:51.511 "message": "Invalid parameters" 00:18:51.511 } 00:18:51.511 11:13:11 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:51.511 11:13:11 -- target/nmic.sh@29 -- # nmic_status=1 00:18:51.511 11:13:11 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:51.511 11:13:11 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:51.511 Adding namespace failed - expected result. 
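Test case1 above deliberately tries to add Malloc0 to the second subsystem and treats the "bdev Malloc0 already claimed" JSON-RPC error (-32602) as the expected outcome. The status handling, condensed into a sketch of the logic visible in the trace (rpc_py as in the previous sketch); test case2 then connects the host through both listeners using the hostnqn/hostid generated earlier:

# Sketch: the negative check from test case1.
nmic_status=0
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || nmic_status=1
# Malloc0 is already claimed by cnode1, so the call must fail.
if [ "$nmic_status" -eq 0 ]; then
    exit 1    # adding the same bdev to two subsystems succeeding would be a bug
fi
echo ' Adding namespace failed - expected result.'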
00:18:51.511 11:13:11 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:51.511 test case2: host connect to nvmf target in multiple paths 00:18:51.511 11:13:11 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:18:51.511 11:13:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.511 11:13:11 -- common/autotest_common.sh@10 -- # set +x 00:18:51.511 [2024-12-13 11:13:11.906105] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:18:51.511 11:13:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.511 11:13:11 -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:52.448 11:13:12 -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:18:53.385 11:13:13 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:53.385 11:13:13 -- common/autotest_common.sh@1187 -- # local i=0 00:18:53.385 11:13:13 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:53.385 11:13:13 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:53.385 11:13:13 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:55.919 11:13:15 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:55.919 11:13:15 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:55.919 11:13:15 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:18:55.919 11:13:15 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:55.919 11:13:15 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:55.919 11:13:15 -- common/autotest_common.sh@1197 -- # return 0 00:18:55.919 11:13:15 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:55.919 [global] 00:18:55.919 thread=1 00:18:55.919 invalidate=1 00:18:55.919 rw=write 00:18:55.919 time_based=1 00:18:55.919 runtime=1 00:18:55.919 ioengine=libaio 00:18:55.919 direct=1 00:18:55.919 bs=4096 00:18:55.919 iodepth=1 00:18:55.919 norandommap=0 00:18:55.919 numjobs=1 00:18:55.919 00:18:55.919 verify_dump=1 00:18:55.919 verify_backlog=512 00:18:55.919 verify_state_save=0 00:18:55.919 do_verify=1 00:18:55.919 verify=crc32c-intel 00:18:55.919 [job0] 00:18:55.919 filename=/dev/nvme0n1 00:18:55.919 Could not set queue depth (nvme0n1) 00:18:55.919 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:55.919 fio-3.35 00:18:55.919 Starting 1 thread 00:18:56.856 00:18:56.856 job0: (groupid=0, jobs=1): err= 0: pid=1644877: Fri Dec 13 11:13:17 2024 00:18:56.856 read: IOPS=7672, BW=30.0MiB/s (31.4MB/s)(30.0MiB/1001msec) 00:18:56.856 slat (nsec): min=6286, max=26347, avg=6895.22, stdev=748.76 00:18:56.856 clat (usec): min=39, max=216, avg=55.82, stdev= 5.17 00:18:56.856 lat (usec): min=51, max=222, avg=62.71, stdev= 5.21 00:18:56.856 clat percentiles (usec): 00:18:56.856 | 1.00th=[ 47], 5.00th=[ 49], 10.00th=[ 50], 20.00th=[ 52], 00:18:56.856 | 30.00th=[ 53], 40.00th=[ 55], 50.00th=[ 57], 60.00th=[ 58], 00:18:56.856 | 
70.00th=[ 59], 80.00th=[ 61], 90.00th=[ 62], 95.00th=[ 64], 00:18:56.856 | 99.00th=[ 69], 99.50th=[ 70], 99.90th=[ 74], 99.95th=[ 77], 00:18:56.856 | 99.99th=[ 217] 00:18:56.856 write: IOPS=7845, BW=30.6MiB/s (32.1MB/s)(30.7MiB/1001msec); 0 zone resets 00:18:56.856 slat (nsec): min=8279, max=38507, avg=9003.86, stdev=940.41 00:18:56.856 clat (nsec): min=39574, max=94014, avg=53097.72, stdev=4986.08 00:18:56.856 lat (usec): min=51, max=132, avg=62.10, stdev= 5.13 00:18:56.856 clat percentiles (nsec): 00:18:56.856 | 1.00th=[44288], 5.00th=[45824], 10.00th=[46848], 20.00th=[48384], 00:18:56.856 | 30.00th=[49920], 40.00th=[50944], 50.00th=[52992], 60.00th=[54528], 00:18:56.856 | 70.00th=[56064], 80.00th=[57600], 90.00th=[59648], 95.00th=[61184], 00:18:56.856 | 99.00th=[65280], 99.50th=[67072], 99.90th=[72192], 99.95th=[78336], 00:18:56.856 | 99.99th=[93696] 00:18:56.856 bw ( KiB/s): min=32768, max=32768, per=100.00%, avg=32768.00, stdev= 0.00, samples=1 00:18:56.856 iops : min= 8192, max= 8192, avg=8192.00, stdev= 0.00, samples=1 00:18:56.856 lat (usec) : 50=21.95%, 100=78.04%, 250=0.01% 00:18:56.856 cpu : usr=6.80%, sys=13.00%, ctx=15533, majf=0, minf=1 00:18:56.856 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:56.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.856 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.856 issued rwts: total=7680,7853,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.856 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:56.856 00:18:56.856 Run status group 0 (all jobs): 00:18:56.856 READ: bw=30.0MiB/s (31.4MB/s), 30.0MiB/s-30.0MiB/s (31.4MB/s-31.4MB/s), io=30.0MiB (31.5MB), run=1001-1001msec 00:18:56.856 WRITE: bw=30.6MiB/s (32.1MB/s), 30.6MiB/s-30.6MiB/s (32.1MB/s-32.1MB/s), io=30.7MiB (32.2MB), run=1001-1001msec 00:18:56.856 00:18:56.856 Disk stats (read/write): 00:18:56.856 nvme0n1: ios=6832/7168, merge=0/0, ticks=359/361, in_queue=720, util=90.58% 00:18:56.856 11:13:17 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:58.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:58.760 11:13:19 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:58.760 11:13:19 -- common/autotest_common.sh@1208 -- # local i=0 00:18:58.760 11:13:19 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:58.760 11:13:19 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:58.760 11:13:19 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:58.760 11:13:19 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:58.760 11:13:19 -- common/autotest_common.sh@1220 -- # return 0 00:18:58.760 11:13:19 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:58.760 11:13:19 -- target/nmic.sh@53 -- # nvmftestfini 00:18:58.760 11:13:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:58.760 11:13:19 -- nvmf/common.sh@116 -- # sync 00:18:58.760 11:13:19 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:18:58.760 11:13:19 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:18:58.760 11:13:19 -- nvmf/common.sh@119 -- # set +e 00:18:58.760 11:13:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:58.760 11:13:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:18:58.760 rmmod nvme_rdma 00:18:58.760 rmmod nvme_fabrics 00:18:58.760 11:13:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:58.760 11:13:19 -- nvmf/common.sh@123 -- # set 
-e 00:18:58.760 11:13:19 -- nvmf/common.sh@124 -- # return 0 00:18:58.760 11:13:19 -- nvmf/common.sh@477 -- # '[' -n 1643751 ']' 00:18:58.760 11:13:19 -- nvmf/common.sh@478 -- # killprocess 1643751 00:18:58.760 11:13:19 -- common/autotest_common.sh@936 -- # '[' -z 1643751 ']' 00:18:58.760 11:13:19 -- common/autotest_common.sh@940 -- # kill -0 1643751 00:18:58.760 11:13:19 -- common/autotest_common.sh@941 -- # uname 00:18:58.760 11:13:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:59.019 11:13:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1643751 00:18:59.019 11:13:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:59.019 11:13:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:59.019 11:13:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1643751' 00:18:59.019 killing process with pid 1643751 00:18:59.019 11:13:19 -- common/autotest_common.sh@955 -- # kill 1643751 00:18:59.019 11:13:19 -- common/autotest_common.sh@960 -- # wait 1643751 00:18:59.414 11:13:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:59.414 11:13:19 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:18:59.414 00:18:59.414 real 0m14.637s 00:18:59.414 user 0m44.744s 00:18:59.414 sys 0m5.080s 00:18:59.414 11:13:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:59.414 11:13:19 -- common/autotest_common.sh@10 -- # set +x 00:18:59.414 ************************************ 00:18:59.414 END TEST nvmf_nmic 00:18:59.414 ************************************ 00:18:59.414 11:13:19 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:18:59.414 11:13:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:59.414 11:13:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:59.414 11:13:19 -- common/autotest_common.sh@10 -- # set +x 00:18:59.414 ************************************ 00:18:59.414 START TEST nvmf_fio_target 00:18:59.414 ************************************ 00:18:59.414 11:13:19 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:18:59.414 * Looking for test storage... 
00:18:59.414 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:59.414 11:13:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:59.414 11:13:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:59.414 11:13:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:59.414 11:13:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:59.414 11:13:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:59.414 11:13:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:59.414 11:13:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:59.414 11:13:19 -- scripts/common.sh@335 -- # IFS=.-: 00:18:59.414 11:13:19 -- scripts/common.sh@335 -- # read -ra ver1 00:18:59.414 11:13:19 -- scripts/common.sh@336 -- # IFS=.-: 00:18:59.414 11:13:19 -- scripts/common.sh@336 -- # read -ra ver2 00:18:59.414 11:13:19 -- scripts/common.sh@337 -- # local 'op=<' 00:18:59.414 11:13:19 -- scripts/common.sh@339 -- # ver1_l=2 00:18:59.414 11:13:19 -- scripts/common.sh@340 -- # ver2_l=1 00:18:59.414 11:13:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:59.414 11:13:19 -- scripts/common.sh@343 -- # case "$op" in 00:18:59.414 11:13:19 -- scripts/common.sh@344 -- # : 1 00:18:59.414 11:13:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:59.414 11:13:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:59.414 11:13:19 -- scripts/common.sh@364 -- # decimal 1 00:18:59.414 11:13:19 -- scripts/common.sh@352 -- # local d=1 00:18:59.414 11:13:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:59.414 11:13:19 -- scripts/common.sh@354 -- # echo 1 00:18:59.414 11:13:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:59.414 11:13:19 -- scripts/common.sh@365 -- # decimal 2 00:18:59.414 11:13:19 -- scripts/common.sh@352 -- # local d=2 00:18:59.414 11:13:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:59.414 11:13:19 -- scripts/common.sh@354 -- # echo 2 00:18:59.414 11:13:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:59.414 11:13:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:59.414 11:13:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:59.414 11:13:19 -- scripts/common.sh@367 -- # return 0 00:18:59.414 11:13:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:59.414 11:13:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:59.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.414 --rc genhtml_branch_coverage=1 00:18:59.414 --rc genhtml_function_coverage=1 00:18:59.414 --rc genhtml_legend=1 00:18:59.414 --rc geninfo_all_blocks=1 00:18:59.414 --rc geninfo_unexecuted_blocks=1 00:18:59.414 00:18:59.414 ' 00:18:59.414 11:13:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:59.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.414 --rc genhtml_branch_coverage=1 00:18:59.414 --rc genhtml_function_coverage=1 00:18:59.414 --rc genhtml_legend=1 00:18:59.414 --rc geninfo_all_blocks=1 00:18:59.414 --rc geninfo_unexecuted_blocks=1 00:18:59.414 00:18:59.414 ' 00:18:59.414 11:13:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:59.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.414 --rc genhtml_branch_coverage=1 00:18:59.414 --rc genhtml_function_coverage=1 00:18:59.414 --rc genhtml_legend=1 00:18:59.414 --rc geninfo_all_blocks=1 00:18:59.414 --rc geninfo_unexecuted_blocks=1 00:18:59.414 00:18:59.414 ' 
00:18:59.414 11:13:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:59.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.414 --rc genhtml_branch_coverage=1 00:18:59.414 --rc genhtml_function_coverage=1 00:18:59.414 --rc genhtml_legend=1 00:18:59.414 --rc geninfo_all_blocks=1 00:18:59.414 --rc geninfo_unexecuted_blocks=1 00:18:59.414 00:18:59.414 ' 00:18:59.414 11:13:19 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:59.414 11:13:19 -- nvmf/common.sh@7 -- # uname -s 00:18:59.414 11:13:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:59.414 11:13:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:59.414 11:13:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:59.414 11:13:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:59.414 11:13:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:59.414 11:13:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:59.414 11:13:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:59.414 11:13:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:59.414 11:13:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:59.414 11:13:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:59.414 11:13:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:18:59.414 11:13:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:18:59.414 11:13:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:59.414 11:13:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:59.414 11:13:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:59.414 11:13:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:59.414 11:13:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.414 11:13:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.414 11:13:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.414 11:13:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.414 11:13:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.415 11:13:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.415 11:13:19 -- paths/export.sh@5 -- # export PATH 00:18:59.415 11:13:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.415 11:13:19 -- nvmf/common.sh@46 -- # : 0 00:18:59.415 11:13:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:59.415 11:13:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:59.415 11:13:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:59.415 11:13:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:59.415 11:13:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:59.415 11:13:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:59.415 11:13:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:59.415 11:13:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:59.415 11:13:19 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:59.415 11:13:19 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:59.415 11:13:19 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:59.415 11:13:19 -- target/fio.sh@16 -- # nvmftestinit 00:18:59.415 11:13:19 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:18:59.415 11:13:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:59.415 11:13:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:59.415 11:13:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:59.415 11:13:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:59.415 11:13:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.415 11:13:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:59.415 11:13:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.415 11:13:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:59.415 11:13:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:59.415 11:13:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:59.415 11:13:19 -- common/autotest_common.sh@10 -- # set +x 00:19:04.703 11:13:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:04.703 11:13:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:04.703 11:13:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:04.703 11:13:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:04.703 11:13:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:04.703 11:13:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:04.703 11:13:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:04.703 11:13:25 -- nvmf/common.sh@294 -- # net_devs=() 
00:19:04.703 11:13:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:04.703 11:13:25 -- nvmf/common.sh@295 -- # e810=() 00:19:04.703 11:13:25 -- nvmf/common.sh@295 -- # local -ga e810 00:19:04.703 11:13:25 -- nvmf/common.sh@296 -- # x722=() 00:19:04.703 11:13:25 -- nvmf/common.sh@296 -- # local -ga x722 00:19:04.703 11:13:25 -- nvmf/common.sh@297 -- # mlx=() 00:19:04.703 11:13:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:04.703 11:13:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:04.703 11:13:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:04.703 11:13:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:04.703 11:13:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:04.703 11:13:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:04.703 11:13:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:04.703 11:13:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:04.703 11:13:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:04.703 11:13:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:04.703 11:13:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:04.703 11:13:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:04.703 11:13:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:04.703 11:13:25 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:04.703 11:13:25 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:04.703 11:13:25 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:04.703 11:13:25 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:04.703 11:13:25 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:04.703 11:13:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:04.703 11:13:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:04.703 11:13:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:19:04.703 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:19:04.703 11:13:25 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:04.703 11:13:25 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:04.703 11:13:25 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:04.703 11:13:25 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:04.703 11:13:25 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:04.703 11:13:25 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:04.703 11:13:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:04.703 11:13:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:19:04.703 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:19:04.703 11:13:25 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:04.703 11:13:25 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:04.703 11:13:25 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:04.703 11:13:25 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:04.703 11:13:25 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:04.703 11:13:25 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:04.703 11:13:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:04.703 11:13:25 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:04.703 11:13:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:04.703 11:13:25 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.962 11:13:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:04.962 11:13:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.962 11:13:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:19:04.962 Found net devices under 0000:18:00.0: mlx_0_0 00:19:04.962 11:13:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.962 11:13:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:04.962 11:13:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.962 11:13:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:04.962 11:13:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.962 11:13:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:19:04.962 Found net devices under 0000:18:00.1: mlx_0_1 00:19:04.962 11:13:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.962 11:13:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:04.962 11:13:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:04.962 11:13:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:04.962 11:13:25 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:04.962 11:13:25 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:04.962 11:13:25 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:04.962 11:13:25 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:04.962 11:13:25 -- nvmf/common.sh@57 -- # uname 00:19:04.962 11:13:25 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:04.962 11:13:25 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:04.962 11:13:25 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:04.962 11:13:25 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:04.962 11:13:25 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:04.962 11:13:25 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:04.962 11:13:25 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:04.962 11:13:25 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:04.962 11:13:25 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:04.962 11:13:25 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:04.962 11:13:25 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:04.962 11:13:25 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:04.962 11:13:25 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:04.962 11:13:25 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:04.962 11:13:25 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:04.962 11:13:25 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:04.962 11:13:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:04.962 11:13:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:04.962 11:13:25 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:04.962 11:13:25 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:04.962 11:13:25 -- nvmf/common.sh@104 -- # continue 2 00:19:04.962 11:13:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:04.962 11:13:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:04.962 11:13:25 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:04.962 11:13:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:04.962 11:13:25 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:04.962 11:13:25 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:04.962 11:13:25 -- 
nvmf/common.sh@104 -- # continue 2 00:19:04.962 11:13:25 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:04.962 11:13:25 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:04.962 11:13:25 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:04.962 11:13:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:04.962 11:13:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:04.962 11:13:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:04.962 11:13:25 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:04.962 11:13:25 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:04.962 11:13:25 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:04.962 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:04.962 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:19:04.962 altname enp24s0f0np0 00:19:04.962 altname ens785f0np0 00:19:04.962 inet 192.168.100.8/24 scope global mlx_0_0 00:19:04.962 valid_lft forever preferred_lft forever 00:19:04.962 11:13:25 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:04.962 11:13:25 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:04.962 11:13:25 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:04.962 11:13:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:04.962 11:13:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:04.962 11:13:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:04.962 11:13:25 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:04.962 11:13:25 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:04.962 11:13:25 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:04.962 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:04.962 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:19:04.962 altname enp24s0f1np1 00:19:04.962 altname ens785f1np1 00:19:04.962 inet 192.168.100.9/24 scope global mlx_0_1 00:19:04.962 valid_lft forever preferred_lft forever 00:19:04.962 11:13:25 -- nvmf/common.sh@410 -- # return 0 00:19:04.962 11:13:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:04.962 11:13:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:04.962 11:13:25 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:04.962 11:13:25 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:04.962 11:13:25 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:04.962 11:13:25 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:04.962 11:13:25 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:04.963 11:13:25 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:04.963 11:13:25 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:04.963 11:13:25 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:04.963 11:13:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:04.963 11:13:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:04.963 11:13:25 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:04.963 11:13:25 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:04.963 11:13:25 -- nvmf/common.sh@104 -- # continue 2 00:19:04.963 11:13:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:04.963 11:13:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:04.963 11:13:25 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:04.963 11:13:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:04.963 11:13:25 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:19:04.963 11:13:25 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:04.963 11:13:25 -- nvmf/common.sh@104 -- # continue 2 00:19:04.963 11:13:25 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:04.963 11:13:25 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:04.963 11:13:25 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:04.963 11:13:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:04.963 11:13:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:04.963 11:13:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:04.963 11:13:25 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:04.963 11:13:25 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:04.963 11:13:25 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:04.963 11:13:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:04.963 11:13:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:04.963 11:13:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:04.963 11:13:25 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:04.963 192.168.100.9' 00:19:04.963 11:13:25 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:04.963 192.168.100.9' 00:19:04.963 11:13:25 -- nvmf/common.sh@445 -- # head -n 1 00:19:04.963 11:13:25 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:04.963 11:13:25 -- nvmf/common.sh@446 -- # tail -n +2 00:19:04.963 11:13:25 -- nvmf/common.sh@446 -- # head -n 1 00:19:04.963 11:13:25 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:04.963 192.168.100.9' 00:19:04.963 11:13:25 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:04.963 11:13:25 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:04.963 11:13:25 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:04.963 11:13:25 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:04.963 11:13:25 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:04.963 11:13:25 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:04.963 11:13:25 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:04.963 11:13:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:04.963 11:13:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:04.963 11:13:25 -- common/autotest_common.sh@10 -- # set +x 00:19:04.963 11:13:25 -- nvmf/common.sh@469 -- # nvmfpid=1648691 00:19:04.963 11:13:25 -- nvmf/common.sh@470 -- # waitforlisten 1648691 00:19:04.963 11:13:25 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:04.963 11:13:25 -- common/autotest_common.sh@829 -- # '[' -z 1648691 ']' 00:19:04.963 11:13:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.963 11:13:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:04.963 11:13:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.963 11:13:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:04.963 11:13:25 -- common/autotest_common.sh@10 -- # set +x 00:19:04.963 [2024-12-13 11:13:25.505110] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:19:04.963 [2024-12-13 11:13:25.505158] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.963 EAL: No free 2048 kB hugepages reported on node 1 00:19:05.222 [2024-12-13 11:13:25.557692] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:05.222 [2024-12-13 11:13:25.630768] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:05.222 [2024-12-13 11:13:25.630867] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:05.222 [2024-12-13 11:13:25.630874] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:05.222 [2024-12-13 11:13:25.630880] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:05.222 [2024-12-13 11:13:25.630921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:05.222 [2024-12-13 11:13:25.631032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:05.222 [2024-12-13 11:13:25.631105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:05.222 [2024-12-13 11:13:25.631106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.789 11:13:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:05.789 11:13:26 -- common/autotest_common.sh@862 -- # return 0 00:19:05.789 11:13:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:05.789 11:13:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:05.789 11:13:26 -- common/autotest_common.sh@10 -- # set +x 00:19:05.789 11:13:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:05.789 11:13:26 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:06.048 [2024-12-13 11:13:26.505863] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xfe0960/0xfe4e50) succeed. 00:19:06.048 [2024-12-13 11:13:26.514039] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xfe1f50/0x10264f0) succeed. 
00:19:06.307 11:13:26 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:06.307 11:13:26 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:06.307 11:13:26 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:06.565 11:13:27 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:06.565 11:13:27 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:06.824 11:13:27 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:19:06.824 11:13:27 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:07.082 11:13:27 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:07.082 11:13:27 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:07.082 11:13:27 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:07.341 11:13:27 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:07.341 11:13:27 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:07.599 11:13:27 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:07.599 11:13:27 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:07.599 11:13:28 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:07.599 11:13:28 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:07.858 11:13:28 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:08.117 11:13:28 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:08.117 11:13:28 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:08.117 11:13:28 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:08.117 11:13:28 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:08.376 11:13:28 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:08.640 [2024-12-13 11:13:28.993382] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:08.640 11:13:29 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:08.640 11:13:29 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:08.898 11:13:29 -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:09.834 11:13:30 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:09.834 11:13:30 -- common/autotest_common.sh@1187 -- # local 
i=0 00:19:09.834 11:13:30 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:09.834 11:13:30 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:19:09.834 11:13:30 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:19:09.834 11:13:30 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:12.367 11:13:32 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:12.367 11:13:32 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:12.367 11:13:32 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:19:12.367 11:13:32 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:19:12.367 11:13:32 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:12.367 11:13:32 -- common/autotest_common.sh@1197 -- # return 0 00:19:12.367 11:13:32 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:12.367 [global] 00:19:12.367 thread=1 00:19:12.367 invalidate=1 00:19:12.367 rw=write 00:19:12.367 time_based=1 00:19:12.367 runtime=1 00:19:12.367 ioengine=libaio 00:19:12.367 direct=1 00:19:12.367 bs=4096 00:19:12.367 iodepth=1 00:19:12.367 norandommap=0 00:19:12.367 numjobs=1 00:19:12.368 00:19:12.368 verify_dump=1 00:19:12.368 verify_backlog=512 00:19:12.368 verify_state_save=0 00:19:12.368 do_verify=1 00:19:12.368 verify=crc32c-intel 00:19:12.368 [job0] 00:19:12.368 filename=/dev/nvme0n1 00:19:12.368 [job1] 00:19:12.368 filename=/dev/nvme0n2 00:19:12.368 [job2] 00:19:12.368 filename=/dev/nvme0n3 00:19:12.368 [job3] 00:19:12.368 filename=/dev/nvme0n4 00:19:12.368 Could not set queue depth (nvme0n1) 00:19:12.368 Could not set queue depth (nvme0n2) 00:19:12.368 Could not set queue depth (nvme0n3) 00:19:12.368 Could not set queue depth (nvme0n4) 00:19:12.368 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:12.368 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:12.368 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:12.368 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:12.368 fio-3.35 00:19:12.368 Starting 4 threads 00:19:13.744 00:19:13.744 job0: (groupid=0, jobs=1): err= 0: pid=1650237: Fri Dec 13 11:13:33 2024 00:19:13.744 read: IOPS=6137, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1001msec) 00:19:13.744 slat (nsec): min=6375, max=21017, avg=7122.48, stdev=641.25 00:19:13.744 clat (usec): min=59, max=261, avg=72.21, stdev= 6.86 00:19:13.744 lat (usec): min=67, max=268, avg=79.33, stdev= 6.90 00:19:13.744 clat percentiles (usec): 00:19:13.744 | 1.00th=[ 64], 5.00th=[ 66], 10.00th=[ 68], 20.00th=[ 69], 00:19:13.744 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 74], 00:19:13.744 | 70.00th=[ 75], 80.00th=[ 76], 90.00th=[ 78], 95.00th=[ 80], 00:19:13.744 | 99.00th=[ 85], 99.50th=[ 87], 99.90th=[ 130], 99.95th=[ 231], 00:19:13.744 | 99.99th=[ 262] 00:19:13.744 write: IOPS=6201, BW=24.2MiB/s (25.4MB/s)(24.2MiB/1001msec); 0 zone resets 00:19:13.744 slat (nsec): min=8379, max=38054, avg=9300.04, stdev=1012.18 00:19:13.744 clat (usec): min=55, max=294, avg=69.38, stdev= 7.89 00:19:13.744 lat (usec): min=65, max=304, avg=78.68, stdev= 7.97 00:19:13.744 clat percentiles (usec): 00:19:13.744 | 1.00th=[ 61], 5.00th=[ 63], 10.00th=[ 64], 20.00th=[ 66], 00:19:13.744 | 30.00th=[ 68], 
40.00th=[ 69], 50.00th=[ 70], 60.00th=[ 71], 00:19:13.744 | 70.00th=[ 72], 80.00th=[ 73], 90.00th=[ 75], 95.00th=[ 78], 00:19:13.744 | 99.00th=[ 82], 99.50th=[ 86], 99.90th=[ 225], 99.95th=[ 231], 00:19:13.744 | 99.99th=[ 293] 00:19:13.744 bw ( KiB/s): min=24640, max=24640, per=36.36%, avg=24640.00, stdev= 0.00, samples=1 00:19:13.744 iops : min= 6160, max= 6160, avg=6160.00, stdev= 0.00, samples=1 00:19:13.744 lat (usec) : 100=99.76%, 250=0.20%, 500=0.04% 00:19:13.744 cpu : usr=5.70%, sys=10.20%, ctx=12352, majf=0, minf=2 00:19:13.744 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:13.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.744 issued rwts: total=6144,6208,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:13.744 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:13.744 job1: (groupid=0, jobs=1): err= 0: pid=1650238: Fri Dec 13 11:13:33 2024 00:19:13.744 read: IOPS=3523, BW=13.8MiB/s (14.4MB/s)(13.8MiB/1001msec) 00:19:13.744 slat (nsec): min=6031, max=28659, avg=7509.80, stdev=1007.35 00:19:13.744 clat (usec): min=59, max=215, avg=133.11, stdev=17.47 00:19:13.744 lat (usec): min=65, max=222, avg=140.62, stdev=17.58 00:19:13.744 clat percentiles (usec): 00:19:13.744 | 1.00th=[ 75], 5.00th=[ 85], 10.00th=[ 122], 20.00th=[ 128], 00:19:13.744 | 30.00th=[ 131], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 139], 00:19:13.744 | 70.00th=[ 141], 80.00th=[ 143], 90.00th=[ 147], 95.00th=[ 153], 00:19:13.744 | 99.00th=[ 178], 99.50th=[ 188], 99.90th=[ 202], 99.95th=[ 204], 00:19:13.744 | 99.99th=[ 217] 00:19:13.744 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:19:13.744 slat (nsec): min=8454, max=71013, avg=9654.16, stdev=1635.05 00:19:13.744 clat (usec): min=52, max=233, avg=126.60, stdev=17.38 00:19:13.744 lat (usec): min=61, max=242, avg=136.26, stdev=17.47 00:19:13.744 clat percentiles (usec): 00:19:13.744 | 1.00th=[ 68], 5.00th=[ 90], 10.00th=[ 115], 20.00th=[ 120], 00:19:13.744 | 30.00th=[ 123], 40.00th=[ 125], 50.00th=[ 127], 60.00th=[ 130], 00:19:13.744 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 143], 95.00th=[ 153], 00:19:13.744 | 99.00th=[ 176], 99.50th=[ 180], 99.90th=[ 192], 99.95th=[ 200], 00:19:13.744 | 99.99th=[ 235] 00:19:13.744 bw ( KiB/s): min=16384, max=16384, per=24.18%, avg=16384.00, stdev= 0.00, samples=1 00:19:13.744 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:19:13.744 lat (usec) : 100=6.03%, 250=93.97% 00:19:13.744 cpu : usr=4.10%, sys=5.70%, ctx=7112, majf=0, minf=1 00:19:13.744 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:13.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.744 issued rwts: total=3527,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:13.744 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:13.744 job2: (groupid=0, jobs=1): err= 0: pid=1650239: Fri Dec 13 11:13:33 2024 00:19:13.744 read: IOPS=3402, BW=13.3MiB/s (13.9MB/s)(13.3MiB/1001msec) 00:19:13.744 slat (nsec): min=6014, max=28149, avg=7725.28, stdev=1132.44 00:19:13.744 clat (usec): min=66, max=227, avg=136.95, stdev=14.69 00:19:13.744 lat (usec): min=74, max=233, avg=144.68, stdev=14.74 00:19:13.744 clat percentiles (usec): 00:19:13.744 | 1.00th=[ 81], 5.00th=[ 122], 10.00th=[ 126], 20.00th=[ 130], 00:19:13.744 | 30.00th=[ 
133], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 139], 00:19:13.744 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 149], 95.00th=[ 161], 00:19:13.744 | 99.00th=[ 186], 99.50th=[ 190], 99.90th=[ 196], 99.95th=[ 200], 00:19:13.744 | 99.99th=[ 229] 00:19:13.744 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:19:13.744 slat (nsec): min=8556, max=65546, avg=9827.64, stdev=1339.61 00:19:13.744 clat (usec): min=62, max=202, avg=127.31, stdev=16.73 00:19:13.744 lat (usec): min=72, max=212, avg=137.14, stdev=16.82 00:19:13.744 clat percentiles (usec): 00:19:13.744 | 1.00th=[ 73], 5.00th=[ 104], 10.00th=[ 115], 20.00th=[ 120], 00:19:13.744 | 30.00th=[ 123], 40.00th=[ 125], 50.00th=[ 127], 60.00th=[ 130], 00:19:13.745 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 143], 95.00th=[ 157], 00:19:13.745 | 99.00th=[ 176], 99.50th=[ 180], 99.90th=[ 186], 99.95th=[ 186], 00:19:13.745 | 99.99th=[ 204] 00:19:13.745 bw ( KiB/s): min=15864, max=15864, per=23.41%, avg=15864.00, stdev= 0.00, samples=1 00:19:13.745 iops : min= 3966, max= 3966, avg=3966.00, stdev= 0.00, samples=1 00:19:13.745 lat (usec) : 100=3.49%, 250=96.51% 00:19:13.745 cpu : usr=4.20%, sys=5.70%, ctx=6991, majf=0, minf=1 00:19:13.745 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:13.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.745 issued rwts: total=3406,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:13.745 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:13.745 job3: (groupid=0, jobs=1): err= 0: pid=1650240: Fri Dec 13 11:13:33 2024 00:19:13.745 read: IOPS=3428, BW=13.4MiB/s (14.0MB/s)(13.4MiB/1001msec) 00:19:13.745 slat (nsec): min=6188, max=21495, avg=7715.62, stdev=929.04 00:19:13.745 clat (usec): min=67, max=204, avg=135.98, stdev=14.43 00:19:13.745 lat (usec): min=75, max=213, avg=143.70, stdev=14.46 00:19:13.745 clat percentiles (usec): 00:19:13.745 | 1.00th=[ 80], 5.00th=[ 120], 10.00th=[ 125], 20.00th=[ 129], 00:19:13.745 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 139], 00:19:13.745 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 149], 95.00th=[ 155], 00:19:13.745 | 99.00th=[ 182], 99.50th=[ 186], 99.90th=[ 192], 99.95th=[ 196], 00:19:13.745 | 99.99th=[ 204] 00:19:13.745 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:19:13.745 slat (nsec): min=8656, max=35992, avg=9838.20, stdev=981.30 00:19:13.745 clat (usec): min=63, max=190, avg=127.21, stdev=16.77 00:19:13.745 lat (usec): min=73, max=199, avg=137.05, stdev=16.83 00:19:13.745 clat percentiles (usec): 00:19:13.745 | 1.00th=[ 73], 5.00th=[ 102], 10.00th=[ 115], 20.00th=[ 120], 00:19:13.745 | 30.00th=[ 123], 40.00th=[ 125], 50.00th=[ 127], 60.00th=[ 130], 00:19:13.745 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 145], 95.00th=[ 157], 00:19:13.745 | 99.00th=[ 174], 99.50th=[ 178], 99.90th=[ 188], 99.95th=[ 190], 00:19:13.745 | 99.99th=[ 190] 00:19:13.745 bw ( KiB/s): min=15936, max=15936, per=23.51%, avg=15936.00, stdev= 0.00, samples=1 00:19:13.745 iops : min= 3984, max= 3984, avg=3984.00, stdev= 0.00, samples=1 00:19:13.745 lat (usec) : 100=3.95%, 250=96.05% 00:19:13.745 cpu : usr=3.40%, sys=6.40%, ctx=7016, majf=0, minf=1 00:19:13.745 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:13.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.745 issued rwts: total=3432,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:13.745 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:13.745 00:19:13.745 Run status group 0 (all jobs): 00:19:13.745 READ: bw=64.4MiB/s (67.6MB/s), 13.3MiB/s-24.0MiB/s (13.9MB/s-25.1MB/s), io=64.5MiB (67.6MB), run=1001-1001msec 00:19:13.745 WRITE: bw=66.2MiB/s (69.4MB/s), 14.0MiB/s-24.2MiB/s (14.7MB/s-25.4MB/s), io=66.2MiB (69.5MB), run=1001-1001msec 00:19:13.745 00:19:13.745 Disk stats (read/write): 00:19:13.745 nvme0n1: ios=5170/5523, merge=0/0, ticks=353/367, in_queue=720, util=87.27% 00:19:13.745 nvme0n2: ios=3064/3072, merge=0/0, ticks=395/369, in_queue=764, util=87.54% 00:19:13.745 nvme0n3: ios=2962/3072, merge=0/0, ticks=383/373, in_queue=756, util=89.35% 00:19:13.745 nvme0n4: ios=2976/3072, merge=0/0, ticks=377/382, in_queue=759, util=89.91% 00:19:13.745 11:13:33 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:13.745 [global] 00:19:13.745 thread=1 00:19:13.745 invalidate=1 00:19:13.745 rw=randwrite 00:19:13.745 time_based=1 00:19:13.745 runtime=1 00:19:13.745 ioengine=libaio 00:19:13.745 direct=1 00:19:13.745 bs=4096 00:19:13.745 iodepth=1 00:19:13.745 norandommap=0 00:19:13.745 numjobs=1 00:19:13.745 00:19:13.745 verify_dump=1 00:19:13.745 verify_backlog=512 00:19:13.745 verify_state_save=0 00:19:13.745 do_verify=1 00:19:13.745 verify=crc32c-intel 00:19:13.745 [job0] 00:19:13.745 filename=/dev/nvme0n1 00:19:13.745 [job1] 00:19:13.745 filename=/dev/nvme0n2 00:19:13.745 [job2] 00:19:13.745 filename=/dev/nvme0n3 00:19:13.745 [job3] 00:19:13.745 filename=/dev/nvme0n4 00:19:13.745 Could not set queue depth (nvme0n1) 00:19:13.745 Could not set queue depth (nvme0n2) 00:19:13.745 Could not set queue depth (nvme0n3) 00:19:13.745 Could not set queue depth (nvme0n4) 00:19:13.745 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:13.745 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:13.745 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:13.745 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:13.745 fio-3.35 00:19:13.745 Starting 4 threads 00:19:15.133 00:19:15.133 job0: (groupid=0, jobs=1): err= 0: pid=1650672: Fri Dec 13 11:13:35 2024 00:19:15.133 read: IOPS=4224, BW=16.5MiB/s (17.3MB/s)(16.5MiB/1001msec) 00:19:15.133 slat (nsec): min=6193, max=28365, avg=7180.35, stdev=727.73 00:19:15.133 clat (usec): min=63, max=295, avg=106.48, stdev=10.07 00:19:15.133 lat (usec): min=70, max=303, avg=113.67, stdev=10.13 00:19:15.133 clat percentiles (usec): 00:19:15.133 | 1.00th=[ 78], 5.00th=[ 93], 10.00th=[ 96], 20.00th=[ 100], 00:19:15.133 | 30.00th=[ 102], 40.00th=[ 105], 50.00th=[ 108], 60.00th=[ 110], 00:19:15.133 | 70.00th=[ 112], 80.00th=[ 114], 90.00th=[ 117], 95.00th=[ 120], 00:19:15.133 | 99.00th=[ 141], 99.50th=[ 147], 99.90th=[ 155], 99.95th=[ 157], 00:19:15.133 | 99.99th=[ 297] 00:19:15.133 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:19:15.133 slat (nsec): min=8248, max=70231, avg=9465.07, stdev=1346.82 00:19:15.133 clat (usec): min=57, max=166, avg=98.75, stdev=10.43 00:19:15.133 lat (usec): min=66, max=211, avg=108.21, stdev=10.63 00:19:15.133 clat percentiles (usec): 00:19:15.133 | 
1.00th=[ 67], 5.00th=[ 83], 10.00th=[ 89], 20.00th=[ 93], 00:19:15.133 | 30.00th=[ 96], 40.00th=[ 98], 50.00th=[ 99], 60.00th=[ 101], 00:19:15.133 | 70.00th=[ 103], 80.00th=[ 105], 90.00th=[ 110], 95.00th=[ 114], 00:19:15.133 | 99.00th=[ 135], 99.50th=[ 139], 99.90th=[ 147], 99.95th=[ 153], 00:19:15.133 | 99.99th=[ 167] 00:19:15.133 bw ( KiB/s): min=18048, max=18048, per=25.55%, avg=18048.00, stdev= 0.00, samples=1 00:19:15.133 iops : min= 4512, max= 4512, avg=4512.00, stdev= 0.00, samples=1 00:19:15.133 lat (usec) : 100=39.05%, 250=60.94%, 500=0.01% 00:19:15.133 cpu : usr=4.80%, sys=7.00%, ctx=8837, majf=0, minf=1 00:19:15.133 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:15.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.133 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.133 issued rwts: total=4229,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.133 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:15.133 job1: (groupid=0, jobs=1): err= 0: pid=1650673: Fri Dec 13 11:13:35 2024 00:19:15.133 read: IOPS=4145, BW=16.2MiB/s (17.0MB/s)(16.2MiB/1001msec) 00:19:15.133 slat (nsec): min=6171, max=29185, avg=7083.23, stdev=866.02 00:19:15.133 clat (usec): min=65, max=159, avg=107.77, stdev= 9.17 00:19:15.133 lat (usec): min=71, max=166, avg=114.85, stdev= 9.15 00:19:15.133 clat percentiles (usec): 00:19:15.133 | 1.00th=[ 84], 5.00th=[ 93], 10.00th=[ 97], 20.00th=[ 101], 00:19:15.133 | 30.00th=[ 104], 40.00th=[ 106], 50.00th=[ 109], 60.00th=[ 111], 00:19:15.133 | 70.00th=[ 113], 80.00th=[ 116], 90.00th=[ 119], 95.00th=[ 122], 00:19:15.133 | 99.00th=[ 129], 99.50th=[ 137], 99.90th=[ 157], 99.95th=[ 157], 00:19:15.133 | 99.99th=[ 159] 00:19:15.133 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:19:15.133 slat (nsec): min=7640, max=37897, avg=9155.45, stdev=1107.00 00:19:15.133 clat (usec): min=63, max=166, avg=100.57, stdev= 9.50 00:19:15.133 lat (usec): min=72, max=175, avg=109.72, stdev= 9.61 00:19:15.133 clat percentiles (usec): 00:19:15.133 | 1.00th=[ 78], 5.00th=[ 88], 10.00th=[ 90], 20.00th=[ 94], 00:19:15.133 | 30.00th=[ 96], 40.00th=[ 98], 50.00th=[ 101], 60.00th=[ 102], 00:19:15.133 | 70.00th=[ 105], 80.00th=[ 108], 90.00th=[ 111], 95.00th=[ 115], 00:19:15.133 | 99.00th=[ 135], 99.50th=[ 141], 99.90th=[ 149], 99.95th=[ 151], 00:19:15.133 | 99.99th=[ 167] 00:19:15.133 bw ( KiB/s): min=17784, max=17784, per=25.18%, avg=17784.00, stdev= 0.00, samples=1 00:19:15.133 iops : min= 4446, max= 4446, avg=4446.00, stdev= 0.00, samples=1 00:19:15.133 lat (usec) : 100=34.11%, 250=65.89% 00:19:15.133 cpu : usr=4.00%, sys=7.40%, ctx=8758, majf=0, minf=1 00:19:15.133 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:15.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.133 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.133 issued rwts: total=4150,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.133 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:15.133 job2: (groupid=0, jobs=1): err= 0: pid=1650674: Fri Dec 13 11:13:35 2024 00:19:15.133 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:19:15.133 slat (nsec): min=6402, max=29277, avg=7427.67, stdev=822.61 00:19:15.133 clat (usec): min=68, max=172, avg=113.43, stdev= 8.91 00:19:15.133 lat (usec): min=75, max=179, avg=120.86, stdev= 8.91 00:19:15.133 clat percentiles (usec): 00:19:15.133 
| 1.00th=[ 81], 5.00th=[ 103], 10.00th=[ 105], 20.00th=[ 109], 00:19:15.133 | 30.00th=[ 111], 40.00th=[ 113], 50.00th=[ 114], 60.00th=[ 116], 00:19:15.133 | 70.00th=[ 117], 80.00th=[ 119], 90.00th=[ 122], 95.00th=[ 126], 00:19:15.133 | 99.00th=[ 145], 99.50th=[ 151], 99.90th=[ 157], 99.95th=[ 163], 00:19:15.133 | 99.99th=[ 172] 00:19:15.133 write: IOPS=4204, BW=16.4MiB/s (17.2MB/s)(16.4MiB/1001msec); 0 zone resets 00:19:15.133 slat (nsec): min=8134, max=36866, avg=9695.80, stdev=1036.47 00:19:15.133 clat (usec): min=65, max=160, avg=105.64, stdev= 8.99 00:19:15.133 lat (usec): min=75, max=183, avg=115.34, stdev= 9.05 00:19:15.133 clat percentiles (usec): 00:19:15.133 | 1.00th=[ 74], 5.00th=[ 96], 10.00th=[ 98], 20.00th=[ 101], 00:19:15.133 | 30.00th=[ 102], 40.00th=[ 104], 50.00th=[ 105], 60.00th=[ 108], 00:19:15.133 | 70.00th=[ 109], 80.00th=[ 111], 90.00th=[ 115], 95.00th=[ 119], 00:19:15.133 | 99.00th=[ 137], 99.50th=[ 141], 99.90th=[ 151], 99.95th=[ 157], 00:19:15.133 | 99.99th=[ 161] 00:19:15.133 bw ( KiB/s): min=16992, max=16992, per=24.06%, avg=16992.00, stdev= 0.00, samples=1 00:19:15.133 iops : min= 4248, max= 4248, avg=4248.00, stdev= 0.00, samples=1 00:19:15.133 lat (usec) : 100=10.16%, 250=89.84% 00:19:15.133 cpu : usr=4.30%, sys=7.00%, ctx=8305, majf=0, minf=1 00:19:15.133 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:15.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.133 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.133 issued rwts: total=4096,4209,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.133 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:15.133 job3: (groupid=0, jobs=1): err= 0: pid=1650675: Fri Dec 13 11:13:35 2024 00:19:15.133 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:19:15.133 slat (nsec): min=6282, max=23342, avg=7189.32, stdev=686.19 00:19:15.133 clat (usec): min=62, max=298, avg=113.23, stdev=10.43 00:19:15.133 lat (usec): min=69, max=305, avg=120.42, stdev=10.45 00:19:15.133 clat percentiles (usec): 00:19:15.133 | 1.00th=[ 80], 5.00th=[ 102], 10.00th=[ 105], 20.00th=[ 109], 00:19:15.133 | 30.00th=[ 111], 40.00th=[ 112], 50.00th=[ 114], 60.00th=[ 115], 00:19:15.133 | 70.00th=[ 117], 80.00th=[ 119], 90.00th=[ 123], 95.00th=[ 127], 00:19:15.133 | 99.00th=[ 151], 99.50th=[ 155], 99.90th=[ 167], 99.95th=[ 172], 00:19:15.133 | 99.99th=[ 297] 00:19:15.133 write: IOPS=4247, BW=16.6MiB/s (17.4MB/s)(16.6MiB/1001msec); 0 zone resets 00:19:15.133 slat (nsec): min=7928, max=39921, avg=9288.46, stdev=971.18 00:19:15.133 clat (usec): min=63, max=213, avg=105.87, stdev=10.87 00:19:15.133 lat (usec): min=72, max=221, avg=115.16, stdev=10.93 00:19:15.133 clat percentiles (usec): 00:19:15.133 | 1.00th=[ 73], 5.00th=[ 92], 10.00th=[ 97], 20.00th=[ 100], 00:19:15.133 | 30.00th=[ 102], 40.00th=[ 104], 50.00th=[ 105], 60.00th=[ 108], 00:19:15.133 | 70.00th=[ 110], 80.00th=[ 112], 90.00th=[ 117], 95.00th=[ 125], 00:19:15.133 | 99.00th=[ 141], 99.50th=[ 145], 99.90th=[ 153], 99.95th=[ 174], 00:19:15.133 | 99.99th=[ 215] 00:19:15.133 bw ( KiB/s): min=17280, max=17280, per=24.46%, avg=17280.00, stdev= 0.00, samples=1 00:19:15.133 iops : min= 4320, max= 4320, avg=4320.00, stdev= 0.00, samples=1 00:19:15.133 lat (usec) : 100=11.60%, 250=88.39%, 500=0.01% 00:19:15.133 cpu : usr=3.10%, sys=8.00%, ctx=8348, majf=0, minf=1 00:19:15.133 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:15.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.133 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.133 issued rwts: total=4096,4252,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.133 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:15.133 00:19:15.133 Run status group 0 (all jobs): 00:19:15.133 READ: bw=64.7MiB/s (67.8MB/s), 16.0MiB/s-16.5MiB/s (16.8MB/s-17.3MB/s), io=64.7MiB (67.9MB), run=1001-1001msec 00:19:15.133 WRITE: bw=69.0MiB/s (72.3MB/s), 16.4MiB/s-18.0MiB/s (17.2MB/s-18.9MB/s), io=69.1MiB (72.4MB), run=1001-1001msec 00:19:15.133 00:19:15.133 Disk stats (read/write): 00:19:15.133 nvme0n1: ios=3634/3977, merge=0/0, ticks=371/373, in_queue=744, util=87.27% 00:19:15.133 nvme0n2: ios=3584/3897, merge=0/0, ticks=386/381, in_queue=767, util=87.44% 00:19:15.133 nvme0n3: ios=3569/3584, merge=0/0, ticks=398/367, in_queue=765, util=89.15% 00:19:15.133 nvme0n4: ios=3584/3613, merge=0/0, ticks=390/375, in_queue=765, util=89.71% 00:19:15.133 11:13:35 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:15.133 [global] 00:19:15.133 thread=1 00:19:15.133 invalidate=1 00:19:15.133 rw=write 00:19:15.133 time_based=1 00:19:15.133 runtime=1 00:19:15.133 ioengine=libaio 00:19:15.133 direct=1 00:19:15.133 bs=4096 00:19:15.133 iodepth=128 00:19:15.133 norandommap=0 00:19:15.133 numjobs=1 00:19:15.133 00:19:15.133 verify_dump=1 00:19:15.133 verify_backlog=512 00:19:15.133 verify_state_save=0 00:19:15.133 do_verify=1 00:19:15.133 verify=crc32c-intel 00:19:15.133 [job0] 00:19:15.133 filename=/dev/nvme0n1 00:19:15.133 [job1] 00:19:15.133 filename=/dev/nvme0n2 00:19:15.133 [job2] 00:19:15.133 filename=/dev/nvme0n3 00:19:15.133 [job3] 00:19:15.133 filename=/dev/nvme0n4 00:19:15.133 Could not set queue depth (nvme0n1) 00:19:15.133 Could not set queue depth (nvme0n2) 00:19:15.133 Could not set queue depth (nvme0n3) 00:19:15.133 Could not set queue depth (nvme0n4) 00:19:15.392 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:15.392 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:15.392 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:15.392 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:15.392 fio-3.35 00:19:15.392 Starting 4 threads 00:19:16.790 00:19:16.790 job0: (groupid=0, jobs=1): err= 0: pid=1651100: Fri Dec 13 11:13:37 2024 00:19:16.790 read: IOPS=5004, BW=19.5MiB/s (20.5MB/s)(19.7MiB/1007msec) 00:19:16.790 slat (nsec): min=1228, max=11451k, avg=105779.01, stdev=596684.27 00:19:16.790 clat (usec): min=2937, max=43741, avg=12922.83, stdev=10610.29 00:19:16.790 lat (usec): min=2947, max=46461, avg=13028.61, stdev=10696.17 00:19:16.790 clat percentiles (usec): 00:19:16.790 | 1.00th=[ 3720], 5.00th=[ 4490], 10.00th=[ 4686], 20.00th=[ 5276], 00:19:16.790 | 30.00th=[ 5800], 40.00th=[ 6587], 50.00th=[ 8029], 60.00th=[10028], 00:19:16.790 | 70.00th=[13829], 80.00th=[19530], 90.00th=[30802], 95.00th=[39060], 00:19:16.790 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42730], 00:19:16.790 | 99.99th=[43779] 00:19:16.790 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:19:16.790 slat (nsec): min=1812, max=8980.2k, avg=89016.96, stdev=495212.47 00:19:16.790 clat (usec): min=2270, max=43907, 
avg=12189.47, stdev=10105.83 00:19:16.790 lat (usec): min=2276, max=44942, avg=12278.48, stdev=10172.59 00:19:16.790 clat percentiles (usec): 00:19:16.790 | 1.00th=[ 2737], 5.00th=[ 3523], 10.00th=[ 4015], 20.00th=[ 4555], 00:19:16.790 | 30.00th=[ 5407], 40.00th=[ 6783], 50.00th=[ 8094], 60.00th=[10421], 00:19:16.790 | 70.00th=[13435], 80.00th=[18482], 90.00th=[29230], 95.00th=[35914], 00:19:16.790 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:19:16.790 | 99.99th=[43779] 00:19:16.790 bw ( KiB/s): min=12288, max=28672, per=20.55%, avg=20480.00, stdev=11585.24, samples=2 00:19:16.790 iops : min= 3072, max= 7168, avg=5120.00, stdev=2896.31, samples=2 00:19:16.790 lat (msec) : 4=5.85%, 10=53.39%, 20=23.55%, 50=17.21% 00:19:16.790 cpu : usr=1.99%, sys=2.68%, ctx=1063, majf=0, minf=1 00:19:16.790 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:16.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:16.790 issued rwts: total=5040,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:16.790 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:16.790 job1: (groupid=0, jobs=1): err= 0: pid=1651101: Fri Dec 13 11:13:37 2024 00:19:16.790 read: IOPS=6302, BW=24.6MiB/s (25.8MB/s)(24.8MiB/1007msec) 00:19:16.790 slat (nsec): min=1252, max=8840.2k, avg=80641.06, stdev=429106.50 00:19:16.790 clat (usec): min=2488, max=38093, avg=10699.79, stdev=6197.10 00:19:16.790 lat (usec): min=2491, max=39269, avg=10780.43, stdev=6237.02 00:19:16.790 clat percentiles (usec): 00:19:16.790 | 1.00th=[ 3785], 5.00th=[ 4883], 10.00th=[ 5276], 20.00th=[ 5997], 00:19:16.790 | 30.00th=[ 6587], 40.00th=[ 7242], 50.00th=[ 8455], 60.00th=[ 9896], 00:19:16.790 | 70.00th=[11600], 80.00th=[15270], 90.00th=[19530], 95.00th=[25297], 00:19:16.790 | 99.00th=[30540], 99.50th=[31065], 99.90th=[33424], 99.95th=[34866], 00:19:16.790 | 99.99th=[38011] 00:19:16.790 write: IOPS=6609, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1007msec); 0 zone resets 00:19:16.790 slat (nsec): min=1810, max=9763.7k, avg=70775.95, stdev=396047.80 00:19:16.790 clat (usec): min=2186, max=37091, avg=8931.30, stdev=4982.77 00:19:16.790 lat (usec): min=2215, max=38652, avg=9002.08, stdev=5024.72 00:19:16.790 clat percentiles (usec): 00:19:16.790 | 1.00th=[ 2835], 5.00th=[ 4113], 10.00th=[ 4752], 20.00th=[ 5080], 00:19:16.790 | 30.00th=[ 5866], 40.00th=[ 6718], 50.00th=[ 7701], 60.00th=[ 8455], 00:19:16.790 | 70.00th=[ 9503], 80.00th=[11863], 90.00th=[15926], 95.00th=[19268], 00:19:16.790 | 99.00th=[28705], 99.50th=[30016], 99.90th=[36963], 99.95th=[36963], 00:19:16.790 | 99.99th=[36963] 00:19:16.790 bw ( KiB/s): min=20352, max=32768, per=26.65%, avg=26560.00, stdev=8779.44, samples=2 00:19:16.790 iops : min= 5088, max= 8192, avg=6640.00, stdev=2194.86, samples=2 00:19:16.790 lat (msec) : 4=2.91%, 10=64.69%, 20=25.70%, 50=6.70% 00:19:16.790 cpu : usr=2.58%, sys=3.48%, ctx=1327, majf=0, minf=1 00:19:16.790 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:19:16.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:16.790 issued rwts: total=6347,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:16.790 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:16.790 job2: (groupid=0, jobs=1): err= 0: pid=1651102: Fri Dec 13 11:13:37 2024 00:19:16.790 read: IOPS=6254, 
BW=24.4MiB/s (25.6MB/s)(24.5MiB/1001msec) 00:19:16.790 slat (nsec): min=1269, max=10238k, avg=74507.12, stdev=454807.88 00:19:16.790 clat (usec): min=383, max=38051, avg=9960.57, stdev=5602.79 00:19:16.790 lat (usec): min=1051, max=38058, avg=10035.08, stdev=5637.20 00:19:16.790 clat percentiles (usec): 00:19:16.790 | 1.00th=[ 2704], 5.00th=[ 4359], 10.00th=[ 5080], 20.00th=[ 5866], 00:19:16.790 | 30.00th=[ 6587], 40.00th=[ 7308], 50.00th=[ 8160], 60.00th=[ 9372], 00:19:16.790 | 70.00th=[11076], 80.00th=[13173], 90.00th=[16712], 95.00th=[21627], 00:19:16.790 | 99.00th=[29230], 99.50th=[31065], 99.90th=[34341], 99.95th=[36963], 00:19:16.790 | 99.99th=[38011] 00:19:16.790 write: IOPS=6649, BW=26.0MiB/s (27.2MB/s)(26.0MiB/1001msec); 0 zone resets 00:19:16.790 slat (nsec): min=1798, max=8472.3k, avg=77287.57, stdev=440271.48 00:19:16.790 clat (usec): min=2395, max=40157, avg=9641.17, stdev=6341.94 00:19:16.790 lat (usec): min=2418, max=40888, avg=9718.46, stdev=6389.76 00:19:16.790 clat percentiles (usec): 00:19:16.790 | 1.00th=[ 3294], 5.00th=[ 4490], 10.00th=[ 4948], 20.00th=[ 5538], 00:19:16.790 | 30.00th=[ 5800], 40.00th=[ 6325], 50.00th=[ 7111], 60.00th=[ 8291], 00:19:16.790 | 70.00th=[11207], 80.00th=[12518], 90.00th=[15139], 95.00th=[27919], 00:19:16.790 | 99.00th=[32375], 99.50th=[32900], 99.90th=[36963], 99.95th=[36963], 00:19:16.790 | 99.99th=[40109] 00:19:16.790 bw ( KiB/s): min=26864, max=26864, per=26.96%, avg=26864.00, stdev= 0.00, samples=1 00:19:16.790 iops : min= 6716, max= 6716, avg=6716.00, stdev= 0.00, samples=1 00:19:16.790 lat (usec) : 500=0.01% 00:19:16.790 lat (msec) : 2=0.26%, 4=2.71%, 10=61.72%, 20=28.65%, 50=6.66% 00:19:16.790 cpu : usr=2.80%, sys=4.00%, ctx=1203, majf=0, minf=1 00:19:16.790 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:19:16.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:16.790 issued rwts: total=6261,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:16.790 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:16.790 job3: (groupid=0, jobs=1): err= 0: pid=1651103: Fri Dec 13 11:13:37 2024 00:19:16.790 read: IOPS=6352, BW=24.8MiB/s (26.0MB/s)(25.0MiB/1007msec) 00:19:16.790 slat (nsec): min=1446, max=3856.9k, avg=75517.39, stdev=321443.18 00:19:16.790 clat (usec): min=656, max=14793, avg=9902.19, stdev=1547.36 00:19:16.790 lat (usec): min=1883, max=15581, avg=9977.71, stdev=1572.97 00:19:16.790 clat percentiles (usec): 00:19:16.790 | 1.00th=[ 4817], 5.00th=[ 7242], 10.00th=[ 7832], 20.00th=[ 8979], 00:19:16.790 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10421], 00:19:16.790 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11600], 95.00th=[11994], 00:19:16.790 | 99.00th=[13304], 99.50th=[14091], 99.90th=[14746], 99.95th=[14746], 00:19:16.790 | 99.99th=[14746] 00:19:16.790 write: IOPS=6609, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1007msec); 0 zone resets 00:19:16.790 slat (nsec): min=1890, max=4315.9k, avg=75266.86, stdev=317253.62 00:19:16.790 clat (usec): min=5263, max=15134, avg=9589.59, stdev=1319.96 00:19:16.790 lat (usec): min=5278, max=15145, avg=9664.86, stdev=1346.22 00:19:16.790 clat percentiles (usec): 00:19:16.790 | 1.00th=[ 6194], 5.00th=[ 7570], 10.00th=[ 8029], 20.00th=[ 8455], 00:19:16.790 | 30.00th=[ 8717], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[10028], 00:19:16.790 | 70.00th=[10290], 80.00th=[10683], 90.00th=[11207], 95.00th=[11863], 00:19:16.790 | 99.00th=[12780], 
99.50th=[13304], 99.90th=[14091], 99.95th=[14353], 00:19:16.790 | 99.99th=[15139] 00:19:16.790 bw ( KiB/s): min=24920, max=28328, per=26.72%, avg=26624.00, stdev=2409.82, samples=2 00:19:16.790 iops : min= 6230, max= 7082, avg=6656.00, stdev=602.45, samples=2 00:19:16.790 lat (usec) : 750=0.01% 00:19:16.790 lat (msec) : 2=0.07%, 4=0.24%, 10=53.41%, 20=46.27% 00:19:16.790 cpu : usr=2.29%, sys=3.08%, ctx=943, majf=0, minf=1 00:19:16.790 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:19:16.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:16.790 issued rwts: total=6397,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:16.790 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:16.790 00:19:16.790 Run status group 0 (all jobs): 00:19:16.790 READ: bw=93.3MiB/s (97.8MB/s), 19.5MiB/s-24.8MiB/s (20.5MB/s-26.0MB/s), io=93.9MiB (98.5MB), run=1001-1007msec 00:19:16.790 WRITE: bw=97.3MiB/s (102MB/s), 19.9MiB/s-26.0MiB/s (20.8MB/s-27.2MB/s), io=98.0MiB (103MB), run=1001-1007msec 00:19:16.790 00:19:16.790 Disk stats (read/write): 00:19:16.790 nvme0n1: ios=4409/4608, merge=0/0, ticks=15991/15024, in_queue=31015, util=87.17% 00:19:16.791 nvme0n2: ios=5651/6144, merge=0/0, ticks=20430/19908, in_queue=40338, util=86.54% 00:19:16.791 nvme0n3: ios=5065/5120, merge=0/0, ticks=21138/21541, in_queue=42679, util=86.47% 00:19:16.791 nvme0n4: ios=5587/5632, merge=0/0, ticks=54044/49633, in_queue=103677, util=89.81% 00:19:16.791 11:13:37 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:16.791 [global] 00:19:16.791 thread=1 00:19:16.791 invalidate=1 00:19:16.791 rw=randwrite 00:19:16.791 time_based=1 00:19:16.791 runtime=1 00:19:16.791 ioengine=libaio 00:19:16.791 direct=1 00:19:16.791 bs=4096 00:19:16.791 iodepth=128 00:19:16.791 norandommap=0 00:19:16.791 numjobs=1 00:19:16.791 00:19:16.791 verify_dump=1 00:19:16.791 verify_backlog=512 00:19:16.791 verify_state_save=0 00:19:16.791 do_verify=1 00:19:16.791 verify=crc32c-intel 00:19:16.791 [job0] 00:19:16.791 filename=/dev/nvme0n1 00:19:16.791 [job1] 00:19:16.791 filename=/dev/nvme0n2 00:19:16.791 [job2] 00:19:16.791 filename=/dev/nvme0n3 00:19:16.791 [job3] 00:19:16.791 filename=/dev/nvme0n4 00:19:16.791 Could not set queue depth (nvme0n1) 00:19:16.791 Could not set queue depth (nvme0n2) 00:19:16.791 Could not set queue depth (nvme0n3) 00:19:16.791 Could not set queue depth (nvme0n4) 00:19:17.052 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:17.052 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:17.052 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:17.052 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:17.052 fio-3.35 00:19:17.052 Starting 4 threads 00:19:18.422 00:19:18.422 job0: (groupid=0, jobs=1): err= 0: pid=1651543: Fri Dec 13 11:13:38 2024 00:19:18.422 read: IOPS=6137, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1001msec) 00:19:18.422 slat (nsec): min=1260, max=4447.4k, avg=80814.06, stdev=367504.41 00:19:18.422 clat (usec): min=3311, max=21370, avg=10482.97, stdev=3396.45 00:19:18.422 lat (usec): min=3351, max=21997, avg=10563.79, stdev=3412.14 00:19:18.422 
clat percentiles (usec): 00:19:18.422 | 1.00th=[ 4752], 5.00th=[ 5866], 10.00th=[ 6456], 20.00th=[ 7504], 00:19:18.422 | 30.00th=[ 8291], 40.00th=[ 9241], 50.00th=[10028], 60.00th=[11076], 00:19:18.423 | 70.00th=[12125], 80.00th=[13173], 90.00th=[15008], 95.00th=[17433], 00:19:18.423 | 99.00th=[19792], 99.50th=[20055], 99.90th=[21103], 99.95th=[21365], 00:19:18.423 | 99.99th=[21365] 00:19:18.423 write: IOPS=6339, BW=24.8MiB/s (26.0MB/s)(24.8MiB/1001msec); 0 zone resets 00:19:18.423 slat (nsec): min=1761, max=5798.6k, avg=75419.29, stdev=328873.09 00:19:18.423 clat (usec): min=293, max=18829, avg=9813.64, stdev=3140.50 00:19:18.423 lat (usec): min=944, max=18837, avg=9889.06, stdev=3151.75 00:19:18.423 clat percentiles (usec): 00:19:18.423 | 1.00th=[ 3621], 5.00th=[ 5669], 10.00th=[ 6128], 20.00th=[ 6652], 00:19:18.423 | 30.00th=[ 7635], 40.00th=[ 8717], 50.00th=[ 9634], 60.00th=[10683], 00:19:18.423 | 70.00th=[11600], 80.00th=[12518], 90.00th=[13829], 95.00th=[15533], 00:19:18.423 | 99.00th=[17433], 99.50th=[17695], 99.90th=[18482], 99.95th=[18744], 00:19:18.423 | 99.99th=[18744] 00:19:18.423 bw ( KiB/s): min=24576, max=24576, per=23.14%, avg=24576.00, stdev= 0.00, samples=1 00:19:18.423 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:19:18.423 lat (usec) : 500=0.01%, 1000=0.02% 00:19:18.423 lat (msec) : 2=0.23%, 4=0.34%, 10=51.25%, 20=47.86%, 50=0.28% 00:19:18.423 cpu : usr=3.30%, sys=4.80%, ctx=1769, majf=0, minf=1 00:19:18.423 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:19:18.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:18.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:18.423 issued rwts: total=6144,6346,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:18.423 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:18.423 job1: (groupid=0, jobs=1): err= 0: pid=1651553: Fri Dec 13 11:13:38 2024 00:19:18.423 read: IOPS=7146, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1003msec) 00:19:18.423 slat (nsec): min=1223, max=5167.4k, avg=69097.20, stdev=308532.13 00:19:18.423 clat (usec): min=2807, max=21092, avg=9209.98, stdev=3107.09 00:19:18.423 lat (usec): min=2809, max=21094, avg=9279.08, stdev=3122.06 00:19:18.423 clat percentiles (usec): 00:19:18.423 | 1.00th=[ 4359], 5.00th=[ 5211], 10.00th=[ 5997], 20.00th=[ 6652], 00:19:18.423 | 30.00th=[ 6980], 40.00th=[ 7439], 50.00th=[ 8717], 60.00th=[ 9765], 00:19:18.423 | 70.00th=[10683], 80.00th=[11731], 90.00th=[13566], 95.00th=[15139], 00:19:18.423 | 99.00th=[18744], 99.50th=[19792], 99.90th=[20317], 99.95th=[20579], 00:19:18.423 | 99.99th=[21103] 00:19:18.423 write: IOPS=7148, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1003msec); 0 zone resets 00:19:18.423 slat (nsec): min=1711, max=4495.3k, avg=66776.50, stdev=282647.47 00:19:18.423 clat (usec): min=355, max=21102, avg=8489.28, stdev=3027.61 00:19:18.423 lat (usec): min=2799, max=23551, avg=8556.06, stdev=3046.18 00:19:18.423 clat percentiles (usec): 00:19:18.423 | 1.00th=[ 3851], 5.00th=[ 4621], 10.00th=[ 5211], 20.00th=[ 6063], 00:19:18.423 | 30.00th=[ 6587], 40.00th=[ 6915], 50.00th=[ 7439], 60.00th=[ 8455], 00:19:18.423 | 70.00th=[ 9634], 80.00th=[11469], 90.00th=[13304], 95.00th=[14222], 00:19:18.423 | 99.00th=[15926], 99.50th=[16450], 99.90th=[17171], 99.95th=[17433], 00:19:18.423 | 99.99th=[21103] 00:19:18.423 bw ( KiB/s): min=28672, max=28672, per=27.00%, avg=28672.00, stdev= 0.00, samples=2 00:19:18.423 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=2 00:19:18.423 lat 
(usec) : 500=0.01% 00:19:18.423 lat (msec) : 4=0.94%, 10=66.31%, 20=32.61%, 50=0.13% 00:19:18.423 cpu : usr=3.79%, sys=5.09%, ctx=1675, majf=0, minf=1 00:19:18.423 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:19:18.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:18.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:18.423 issued rwts: total=7168,7170,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:18.423 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:18.423 job2: (groupid=0, jobs=1): err= 0: pid=1651565: Fri Dec 13 11:13:38 2024 00:19:18.423 read: IOPS=6481, BW=25.3MiB/s (26.5MB/s)(25.4MiB/1002msec) 00:19:18.423 slat (nsec): min=1278, max=5541.5k, avg=73326.05, stdev=340775.74 00:19:18.423 clat (usec): min=506, max=17914, avg=9412.42, stdev=2722.35 00:19:18.423 lat (usec): min=1385, max=17918, avg=9485.75, stdev=2731.48 00:19:18.423 clat percentiles (usec): 00:19:18.423 | 1.00th=[ 3785], 5.00th=[ 5473], 10.00th=[ 6259], 20.00th=[ 7177], 00:19:18.423 | 30.00th=[ 7701], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[ 9896], 00:19:18.423 | 70.00th=[10552], 80.00th=[11469], 90.00th=[13304], 95.00th=[14615], 00:19:18.423 | 99.00th=[16188], 99.50th=[16712], 99.90th=[17957], 99.95th=[17957], 00:19:18.423 | 99.99th=[17957] 00:19:18.423 write: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec); 0 zone resets 00:19:18.423 slat (nsec): min=1764, max=5183.7k, avg=75024.23, stdev=339008.62 00:19:18.423 clat (usec): min=3611, max=18667, avg=9850.25, stdev=2811.18 00:19:18.423 lat (usec): min=3660, max=18669, avg=9925.27, stdev=2825.40 00:19:18.423 clat percentiles (usec): 00:19:18.423 | 1.00th=[ 4686], 5.00th=[ 5735], 10.00th=[ 6325], 20.00th=[ 7111], 00:19:18.423 | 30.00th=[ 7701], 40.00th=[ 8717], 50.00th=[ 9765], 60.00th=[10683], 00:19:18.423 | 70.00th=[11469], 80.00th=[12518], 90.00th=[13566], 95.00th=[14353], 00:19:18.423 | 99.00th=[16057], 99.50th=[16319], 99.90th=[18744], 99.95th=[18744], 00:19:18.423 | 99.99th=[18744] 00:19:18.423 bw ( KiB/s): min=24576, max=24576, per=23.14%, avg=24576.00, stdev= 0.00, samples=1 00:19:18.423 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:19:18.423 lat (usec) : 750=0.01% 00:19:18.423 lat (msec) : 2=0.13%, 4=0.67%, 10=55.48%, 20=43.71% 00:19:18.423 cpu : usr=2.90%, sys=5.19%, ctx=1467, majf=0, minf=1 00:19:18.423 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:19:18.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:18.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:18.423 issued rwts: total=6494,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:18.423 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:18.423 job3: (groupid=0, jobs=1): err= 0: pid=1651572: Fri Dec 13 11:13:38 2024 00:19:18.423 read: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec) 00:19:18.423 slat (nsec): min=1263, max=4633.2k, avg=77734.44, stdev=321976.16 00:19:18.423 clat (usec): min=4376, max=19283, avg=9843.90, stdev=2932.45 00:19:18.423 lat (usec): min=4501, max=20734, avg=9921.64, stdev=2948.31 00:19:18.423 clat percentiles (usec): 00:19:18.423 | 1.00th=[ 5342], 5.00th=[ 5997], 10.00th=[ 6718], 20.00th=[ 7373], 00:19:18.423 | 30.00th=[ 7767], 40.00th=[ 8291], 50.00th=[ 9110], 60.00th=[10290], 00:19:18.423 | 70.00th=[11207], 80.00th=[12387], 90.00th=[14091], 95.00th=[15139], 00:19:18.423 | 99.00th=[17433], 99.50th=[18220], 99.90th=[19268], 99.95th=[19268], 
00:19:18.423 | 99.99th=[19268] 00:19:18.423 write: IOPS=6440, BW=25.2MiB/s (26.4MB/s)(25.2MiB/1003msec); 0 zone resets 00:19:18.423 slat (nsec): min=1749, max=5056.9k, avg=77331.84, stdev=336510.15 00:19:18.423 clat (usec): min=719, max=19729, avg=10289.87, stdev=3312.76 00:19:18.423 lat (usec): min=4107, max=19731, avg=10367.20, stdev=3326.08 00:19:18.423 clat percentiles (usec): 00:19:18.423 | 1.00th=[ 4883], 5.00th=[ 5932], 10.00th=[ 6783], 20.00th=[ 7308], 00:19:18.423 | 30.00th=[ 7898], 40.00th=[ 8717], 50.00th=[ 9372], 60.00th=[10552], 00:19:18.423 | 70.00th=[11994], 80.00th=[13566], 90.00th=[15270], 95.00th=[16712], 00:19:18.423 | 99.00th=[17695], 99.50th=[17957], 99.90th=[19268], 99.95th=[19792], 00:19:18.423 | 99.99th=[19792] 00:19:18.423 bw ( KiB/s): min=22312, max=28352, per=23.85%, avg=25332.00, stdev=4270.92, samples=2 00:19:18.423 iops : min= 5578, max= 7088, avg=6333.00, stdev=1067.73, samples=2 00:19:18.423 lat (usec) : 750=0.01% 00:19:18.423 lat (msec) : 4=0.01%, 10=56.62%, 20=43.36% 00:19:18.423 cpu : usr=2.79%, sys=5.39%, ctx=1581, majf=0, minf=2 00:19:18.423 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:19:18.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:18.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:18.423 issued rwts: total=6144,6460,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:18.423 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:18.423 00:19:18.423 Run status group 0 (all jobs): 00:19:18.423 READ: bw=101MiB/s (106MB/s), 23.9MiB/s-27.9MiB/s (25.1MB/s-29.3MB/s), io=101MiB (106MB), run=1001-1003msec 00:19:18.423 WRITE: bw=104MiB/s (109MB/s), 24.8MiB/s-27.9MiB/s (26.0MB/s-29.3MB/s), io=104MiB (109MB), run=1001-1003msec 00:19:18.423 00:19:18.423 Disk stats (read/write): 00:19:18.423 nvme0n1: ios=5362/5632, merge=0/0, ticks=15249/15088, in_queue=30337, util=84.17% 00:19:18.423 nvme0n2: ios=6144/6494, merge=0/0, ticks=14907/14510, in_queue=29417, util=85.74% 00:19:18.423 nvme0n3: ios=5120/5230, merge=0/0, ticks=15379/16002, in_queue=31381, util=88.16% 00:19:18.423 nvme0n4: ios=5324/5632, merge=0/0, ticks=13962/15081, in_queue=29043, util=89.31% 00:19:18.423 11:13:38 -- target/fio.sh@55 -- # sync 00:19:18.423 11:13:38 -- target/fio.sh@59 -- # fio_pid=1651802 00:19:18.423 11:13:38 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:18.423 11:13:38 -- target/fio.sh@61 -- # sleep 3 00:19:18.423 [global] 00:19:18.423 thread=1 00:19:18.423 invalidate=1 00:19:18.423 rw=read 00:19:18.423 time_based=1 00:19:18.423 runtime=10 00:19:18.423 ioengine=libaio 00:19:18.423 direct=1 00:19:18.423 bs=4096 00:19:18.423 iodepth=1 00:19:18.423 norandommap=1 00:19:18.423 numjobs=1 00:19:18.423 00:19:18.423 [job0] 00:19:18.423 filename=/dev/nvme0n1 00:19:18.423 [job1] 00:19:18.423 filename=/dev/nvme0n2 00:19:18.423 [job2] 00:19:18.423 filename=/dev/nvme0n3 00:19:18.423 [job3] 00:19:18.423 filename=/dev/nvme0n4 00:19:18.423 Could not set queue depth (nvme0n1) 00:19:18.423 Could not set queue depth (nvme0n2) 00:19:18.423 Could not set queue depth (nvme0n3) 00:19:18.423 Could not set queue depth (nvme0n4) 00:19:18.423 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:18.423 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:18.423 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:18.423 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:18.423 fio-3.35 00:19:18.423 Starting 4 threads 00:19:21.698 11:13:41 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:21.698 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=84774912, buflen=4096 00:19:21.698 fio: pid=1652011, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:19:21.698 11:13:41 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:21.698 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=120778752, buflen=4096 00:19:21.698 fio: pid=1652006, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:19:21.698 11:13:41 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:21.698 11:13:41 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:21.698 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=33034240, buflen=4096 00:19:21.698 fio: pid=1651977, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:19:21.698 11:13:42 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:21.698 11:13:42 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:21.955 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=67096576, buflen=4096 00:19:21.955 fio: pid=1651988, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:19:21.955 11:13:42 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:21.956 11:13:42 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:21.956 00:19:21.956 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1651977: Fri Dec 13 11:13:42 2024 00:19:21.956 read: IOPS=7966, BW=31.1MiB/s (32.6MB/s)(95.5MiB/3069msec) 00:19:21.956 slat (usec): min=4, max=23599, avg= 9.68, stdev=216.67 00:19:21.956 clat (usec): min=43, max=231, avg=113.68, stdev=23.42 00:19:21.956 lat (usec): min=53, max=23763, avg=123.35, stdev=217.94 00:19:21.956 clat percentiles (usec): 00:19:21.956 | 1.00th=[ 59], 5.00th=[ 70], 10.00th=[ 74], 20.00th=[ 101], 00:19:21.956 | 30.00th=[ 113], 40.00th=[ 116], 50.00th=[ 119], 60.00th=[ 122], 00:19:21.956 | 70.00th=[ 125], 80.00th=[ 129], 90.00th=[ 137], 95.00th=[ 147], 00:19:21.956 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 184], 99.95th=[ 190], 00:19:21.956 | 99.99th=[ 206] 00:19:21.956 bw ( KiB/s): min=30160, max=31816, per=23.31%, avg=30913.60, stdev=595.57, samples=5 00:19:21.956 iops : min= 7540, max= 7954, avg=7728.40, stdev=148.89, samples=5 00:19:21.956 lat (usec) : 50=0.03%, 100=19.82%, 250=80.14% 00:19:21.956 cpu : usr=1.53%, sys=7.20%, ctx=24455, majf=0, minf=1 00:19:21.956 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:21.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.956 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.956 issued rwts: total=24450,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:19:21.956 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:21.956 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1651988: Fri Dec 13 11:13:42 2024 00:19:21.956 read: IOPS=10.1k, BW=39.5MiB/s (41.4MB/s)(128MiB/3240msec) 00:19:21.956 slat (usec): min=2, max=17098, avg= 8.99, stdev=163.70 00:19:21.956 clat (usec): min=42, max=20042, avg=88.62, stdev=112.38 00:19:21.956 lat (usec): min=44, max=20049, avg=97.61, stdev=198.58 00:19:21.956 clat percentiles (usec): 00:19:21.956 | 1.00th=[ 49], 5.00th=[ 54], 10.00th=[ 68], 20.00th=[ 76], 00:19:21.956 | 30.00th=[ 79], 40.00th=[ 81], 50.00th=[ 83], 60.00th=[ 86], 00:19:21.956 | 70.00th=[ 91], 80.00th=[ 108], 90.00th=[ 118], 95.00th=[ 130], 00:19:21.956 | 99.00th=[ 153], 99.50th=[ 157], 99.90th=[ 176], 99.95th=[ 190], 00:19:21.956 | 99.99th=[ 281] 00:19:21.956 bw ( KiB/s): min=30656, max=44152, per=29.87%, avg=39602.83, stdev=5850.97, samples=6 00:19:21.956 iops : min= 7664, max=11038, avg=9900.67, stdev=1462.73, samples=6 00:19:21.956 lat (usec) : 50=2.00%, 100=75.94%, 250=22.04%, 500=0.01%, 750=0.01% 00:19:21.956 lat (msec) : 50=0.01% 00:19:21.956 cpu : usr=2.44%, sys=8.24%, ctx=32774, majf=0, minf=2 00:19:21.956 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:21.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.956 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.956 issued rwts: total=32766,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.956 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:21.956 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1652006: Fri Dec 13 11:13:42 2024 00:19:21.956 read: IOPS=10.2k, BW=39.9MiB/s (41.9MB/s)(115MiB/2885msec) 00:19:21.956 slat (usec): min=4, max=8849, avg= 7.78, stdev=68.75 00:19:21.956 clat (usec): min=54, max=195, avg=88.12, stdev=15.60 00:19:21.956 lat (usec): min=58, max=8933, avg=95.90, stdev=70.55 00:19:21.956 clat percentiles (usec): 00:19:21.956 | 1.00th=[ 70], 5.00th=[ 74], 10.00th=[ 75], 20.00th=[ 78], 00:19:21.956 | 30.00th=[ 79], 40.00th=[ 81], 50.00th=[ 83], 60.00th=[ 85], 00:19:21.956 | 70.00th=[ 89], 80.00th=[ 101], 90.00th=[ 116], 95.00th=[ 119], 00:19:21.956 | 99.00th=[ 139], 99.50th=[ 145], 99.90th=[ 153], 99.95th=[ 155], 00:19:21.956 | 99.99th=[ 169] 00:19:21.956 bw ( KiB/s): min=34256, max=44816, per=30.94%, avg=41019.20, stdev=5048.62, samples=5 00:19:21.956 iops : min= 8564, max=11204, avg=10254.80, stdev=1262.15, samples=5 00:19:21.956 lat (usec) : 100=79.80%, 250=20.20% 00:19:21.956 cpu : usr=2.22%, sys=9.02%, ctx=29490, majf=0, minf=1 00:19:21.956 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:21.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.956 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.956 issued rwts: total=29488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.956 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:21.956 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1652011: Fri Dec 13 11:13:42 2024 00:19:21.956 read: IOPS=7632, BW=29.8MiB/s (31.3MB/s)(80.8MiB/2712msec) 00:19:21.956 slat (nsec): min=6152, max=34168, avg=7247.19, stdev=744.40 00:19:21.956 clat (usec): min=67, max=215, avg=121.34, stdev=13.39 00:19:21.956 lat (usec): min=74, max=223, avg=128.59, 
stdev=13.40 00:19:21.956 clat percentiles (usec): 00:19:21.956 | 1.00th=[ 85], 5.00th=[ 103], 10.00th=[ 110], 20.00th=[ 114], 00:19:21.956 | 30.00th=[ 117], 40.00th=[ 119], 50.00th=[ 121], 60.00th=[ 123], 00:19:21.956 | 70.00th=[ 126], 80.00th=[ 130], 90.00th=[ 137], 95.00th=[ 147], 00:19:21.956 | 99.00th=[ 163], 99.50th=[ 169], 99.90th=[ 184], 99.95th=[ 188], 00:19:21.956 | 99.99th=[ 196] 00:19:21.956 bw ( KiB/s): min=30072, max=31816, per=23.30%, avg=30896.00, stdev=624.03, samples=5 00:19:21.956 iops : min= 7518, max= 7954, avg=7724.00, stdev=156.01, samples=5 00:19:21.956 lat (usec) : 100=4.27%, 250=95.72% 00:19:21.956 cpu : usr=2.32%, sys=6.16%, ctx=20698, majf=0, minf=2 00:19:21.956 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:21.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.956 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.956 issued rwts: total=20698,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.956 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:21.956 00:19:21.956 Run status group 0 (all jobs): 00:19:21.956 READ: bw=129MiB/s (136MB/s), 29.8MiB/s-39.9MiB/s (31.3MB/s-41.9MB/s), io=420MiB (440MB), run=2712-3240msec 00:19:21.956 00:19:21.956 Disk stats (read/write): 00:19:21.956 nvme0n1: ios=22481/0, merge=0/0, ticks=2568/0, in_queue=2568, util=94.49% 00:19:21.956 nvme0n2: ios=30828/0, merge=0/0, ticks=2647/0, in_queue=2647, util=94.22% 00:19:21.956 nvme0n3: ios=29487/0, merge=0/0, ticks=2483/0, in_queue=2483, util=96.06% 00:19:21.956 nvme0n4: ios=20154/0, merge=0/0, ticks=2358/0, in_queue=2358, util=96.49% 00:19:22.213 11:13:42 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:22.213 11:13:42 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:22.213 11:13:42 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:22.213 11:13:42 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:22.471 11:13:42 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:22.471 11:13:42 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:22.728 11:13:43 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:22.728 11:13:43 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:22.985 11:13:43 -- target/fio.sh@69 -- # fio_status=0 00:19:22.985 11:13:43 -- target/fio.sh@70 -- # wait 1651802 00:19:22.985 11:13:43 -- target/fio.sh@70 -- # fio_status=4 00:19:22.985 11:13:43 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:23.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:23.914 11:13:44 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:23.914 11:13:44 -- common/autotest_common.sh@1208 -- # local i=0 00:19:23.914 11:13:44 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:23.914 11:13:44 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:23.914 11:13:44 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:23.914 11:13:44 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 
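The hotplug phase above follows a simple pattern: the fio wrapper is started in the background issuing 4 KiB reads at queue depth 1 against the four connected namespaces, and the backing bdevs (concat0, raid0 and Malloc0 through Malloc6) are then deleted over RPC, so each job eventually dies with the expected "Operation not supported" io_u error. A condensed bash sketch of that sequence, using the wrapper flags and bdev names from this trace (the loop is an illustration of the flow in target/fio.sh, not its literal body):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  wrapper=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper

  # start a long-running read workload against the connected namespaces
  $wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
  fio_pid=$!
  sleep 3

  # pull the backing bdevs out from under the live fio job
  $rpc bdev_raid_delete concat0
  $rpc bdev_raid_delete raid0
  for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
      $rpc bdev_malloc_delete "$m"
  done

  # fio is expected to fail once its block devices disappear
  fio_status=0
  wait "$fio_pid" || fio_status=$?      # 4 in this run: the jobs aborted

That non-zero fio_status is what drives the "nvmf hotplug test: fio failed as expected" message in the lines that follow, once the initiator has been disconnected from nqn.2016-06.io.spdk:cnode1.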
00:19:23.914 11:13:44 -- common/autotest_common.sh@1220 -- # return 0 00:19:23.914 11:13:44 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:23.914 11:13:44 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:23.914 nvmf hotplug test: fio failed as expected 00:19:23.914 11:13:44 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:23.914 11:13:44 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:23.914 11:13:44 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:23.914 11:13:44 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:23.914 11:13:44 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:23.914 11:13:44 -- target/fio.sh@91 -- # nvmftestfini 00:19:23.914 11:13:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:23.914 11:13:44 -- nvmf/common.sh@116 -- # sync 00:19:23.914 11:13:44 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:23.914 11:13:44 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:23.914 11:13:44 -- nvmf/common.sh@119 -- # set +e 00:19:23.914 11:13:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:23.914 11:13:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:23.914 rmmod nvme_rdma 00:19:23.914 rmmod nvme_fabrics 00:19:23.914 11:13:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:23.914 11:13:44 -- nvmf/common.sh@123 -- # set -e 00:19:23.914 11:13:44 -- nvmf/common.sh@124 -- # return 0 00:19:23.914 11:13:44 -- nvmf/common.sh@477 -- # '[' -n 1648691 ']' 00:19:23.914 11:13:44 -- nvmf/common.sh@478 -- # killprocess 1648691 00:19:23.914 11:13:44 -- common/autotest_common.sh@936 -- # '[' -z 1648691 ']' 00:19:23.914 11:13:44 -- common/autotest_common.sh@940 -- # kill -0 1648691 00:19:23.914 11:13:44 -- common/autotest_common.sh@941 -- # uname 00:19:23.914 11:13:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:24.172 11:13:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1648691 00:19:24.172 11:13:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:24.172 11:13:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:24.172 11:13:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1648691' 00:19:24.172 killing process with pid 1648691 00:19:24.172 11:13:44 -- common/autotest_common.sh@955 -- # kill 1648691 00:19:24.172 11:13:44 -- common/autotest_common.sh@960 -- # wait 1648691 00:19:24.429 11:13:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:24.429 11:13:44 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:24.429 00:19:24.429 real 0m25.090s 00:19:24.429 user 2m2.408s 00:19:24.429 sys 0m8.629s 00:19:24.429 11:13:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:24.429 11:13:44 -- common/autotest_common.sh@10 -- # set +x 00:19:24.429 ************************************ 00:19:24.429 END TEST nvmf_fio_target 00:19:24.429 ************************************ 00:19:24.429 11:13:44 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:19:24.429 11:13:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:24.429 11:13:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:24.429 11:13:44 -- common/autotest_common.sh@10 -- # set +x 00:19:24.429 ************************************ 00:19:24.429 START TEST nvmf_bdevio 00:19:24.429 ************************************ 00:19:24.429 
11:13:44 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:19:24.429 * Looking for test storage... 00:19:24.429 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:24.429 11:13:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:24.429 11:13:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:24.429 11:13:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:24.429 11:13:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:24.429 11:13:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:24.429 11:13:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:24.429 11:13:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:24.429 11:13:44 -- scripts/common.sh@335 -- # IFS=.-: 00:19:24.429 11:13:44 -- scripts/common.sh@335 -- # read -ra ver1 00:19:24.429 11:13:44 -- scripts/common.sh@336 -- # IFS=.-: 00:19:24.429 11:13:44 -- scripts/common.sh@336 -- # read -ra ver2 00:19:24.429 11:13:44 -- scripts/common.sh@337 -- # local 'op=<' 00:19:24.429 11:13:44 -- scripts/common.sh@339 -- # ver1_l=2 00:19:24.429 11:13:44 -- scripts/common.sh@340 -- # ver2_l=1 00:19:24.429 11:13:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:24.429 11:13:44 -- scripts/common.sh@343 -- # case "$op" in 00:19:24.429 11:13:44 -- scripts/common.sh@344 -- # : 1 00:19:24.429 11:13:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:24.429 11:13:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:24.429 11:13:44 -- scripts/common.sh@364 -- # decimal 1 00:19:24.429 11:13:44 -- scripts/common.sh@352 -- # local d=1 00:19:24.429 11:13:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:24.429 11:13:44 -- scripts/common.sh@354 -- # echo 1 00:19:24.429 11:13:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:24.429 11:13:44 -- scripts/common.sh@365 -- # decimal 2 00:19:24.429 11:13:44 -- scripts/common.sh@352 -- # local d=2 00:19:24.429 11:13:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:24.429 11:13:44 -- scripts/common.sh@354 -- # echo 2 00:19:24.429 11:13:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:24.429 11:13:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:24.429 11:13:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:24.429 11:13:44 -- scripts/common.sh@367 -- # return 0 00:19:24.429 11:13:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:24.429 11:13:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:24.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.429 --rc genhtml_branch_coverage=1 00:19:24.429 --rc genhtml_function_coverage=1 00:19:24.429 --rc genhtml_legend=1 00:19:24.429 --rc geninfo_all_blocks=1 00:19:24.429 --rc geninfo_unexecuted_blocks=1 00:19:24.429 00:19:24.429 ' 00:19:24.429 11:13:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:24.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.429 --rc genhtml_branch_coverage=1 00:19:24.429 --rc genhtml_function_coverage=1 00:19:24.429 --rc genhtml_legend=1 00:19:24.429 --rc geninfo_all_blocks=1 00:19:24.429 --rc geninfo_unexecuted_blocks=1 00:19:24.429 00:19:24.429 ' 00:19:24.429 11:13:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:24.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.429 --rc genhtml_branch_coverage=1 00:19:24.429 
--rc genhtml_function_coverage=1 00:19:24.429 --rc genhtml_legend=1 00:19:24.429 --rc geninfo_all_blocks=1 00:19:24.430 --rc geninfo_unexecuted_blocks=1 00:19:24.430 00:19:24.430 ' 00:19:24.430 11:13:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:24.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.430 --rc genhtml_branch_coverage=1 00:19:24.430 --rc genhtml_function_coverage=1 00:19:24.430 --rc genhtml_legend=1 00:19:24.430 --rc geninfo_all_blocks=1 00:19:24.430 --rc geninfo_unexecuted_blocks=1 00:19:24.430 00:19:24.430 ' 00:19:24.430 11:13:44 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:24.688 11:13:44 -- nvmf/common.sh@7 -- # uname -s 00:19:24.688 11:13:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:24.688 11:13:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:24.688 11:13:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:24.688 11:13:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:24.688 11:13:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:24.688 11:13:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:24.688 11:13:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:24.688 11:13:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:24.688 11:13:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:24.688 11:13:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:24.688 11:13:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:19:24.688 11:13:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:19:24.688 11:13:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:24.688 11:13:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:24.688 11:13:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:24.688 11:13:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:24.688 11:13:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:24.688 11:13:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:24.688 11:13:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:24.688 11:13:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.688 11:13:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.688 11:13:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.688 11:13:45 -- paths/export.sh@5 -- # export PATH 00:19:24.688 11:13:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.688 11:13:45 -- nvmf/common.sh@46 -- # : 0 00:19:24.688 11:13:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:24.689 11:13:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:24.689 11:13:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:24.689 11:13:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:24.689 11:13:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:24.689 11:13:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:24.689 11:13:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:24.689 11:13:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:24.689 11:13:45 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:24.689 11:13:45 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:24.689 11:13:45 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:24.689 11:13:45 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:24.689 11:13:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:24.689 11:13:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:24.689 11:13:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:24.689 11:13:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:24.689 11:13:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:24.689 11:13:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:24.689 11:13:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.689 11:13:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:24.689 11:13:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:24.689 11:13:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:24.689 11:13:45 -- common/autotest_common.sh@10 -- # set +x 00:19:29.948 11:13:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:29.948 11:13:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:29.948 11:13:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:29.948 11:13:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:29.948 11:13:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:29.948 11:13:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:29.948 11:13:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:29.948 11:13:50 -- nvmf/common.sh@294 -- # net_devs=() 00:19:29.948 11:13:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:29.948 11:13:50 -- nvmf/common.sh@295 
-- # e810=() 00:19:29.948 11:13:50 -- nvmf/common.sh@295 -- # local -ga e810 00:19:29.948 11:13:50 -- nvmf/common.sh@296 -- # x722=() 00:19:29.948 11:13:50 -- nvmf/common.sh@296 -- # local -ga x722 00:19:29.948 11:13:50 -- nvmf/common.sh@297 -- # mlx=() 00:19:29.948 11:13:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:29.948 11:13:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:29.948 11:13:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:29.948 11:13:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:29.948 11:13:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:29.948 11:13:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:29.948 11:13:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:29.948 11:13:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:29.948 11:13:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:29.948 11:13:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:29.948 11:13:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:29.948 11:13:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:29.948 11:13:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:29.948 11:13:50 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:29.948 11:13:50 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:29.948 11:13:50 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:29.948 11:13:50 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:29.948 11:13:50 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:29.948 11:13:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:29.948 11:13:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:29.948 11:13:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:19:29.948 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:19:29.948 11:13:50 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:29.948 11:13:50 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:29.948 11:13:50 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:29.948 11:13:50 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:29.948 11:13:50 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:29.948 11:13:50 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:29.948 11:13:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:29.948 11:13:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:19:29.948 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:19:29.948 11:13:50 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:29.948 11:13:50 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:29.948 11:13:50 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:29.948 11:13:50 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:29.948 11:13:50 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:29.948 11:13:50 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:29.948 11:13:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:29.948 11:13:50 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:29.948 11:13:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:29.948 11:13:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.948 11:13:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
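Both ConnectX ports (0000:18:00.0 and 0000:18:00.1, device id 0x15b3:0x1015) have now been matched as supported mlx5 NICs; the trace that follows maps them to the mlx_0_0 and mlx_0_1 netdevs, loads the kernel RDMA stack, and gives each port an address in the 192.168.100.0/24 test subnet. Functionally, rdma_device_init plus allocate_nic_ips boil down to something like the sketch below (interface names and addresses are taken from this log; the actual helpers in nvmf/common.sh iterate over the discovered devices rather than hard-coding them):

  # load the IB/RDMA modules needed by the nvme-rdma target and initiator
  for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$mod"
  done

  # assign the test addresses picked up later as NVMF_FIRST/SECOND_TARGET_IP
  ip addr add 192.168.100.8/24 dev mlx_0_0
  ip addr add 192.168.100.9/24 dev mlx_0_1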
00:19:29.948 11:13:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.948 11:13:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:19:29.948 Found net devices under 0000:18:00.0: mlx_0_0 00:19:29.948 11:13:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.948 11:13:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:29.948 11:13:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.948 11:13:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:29.948 11:13:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.948 11:13:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:19:29.948 Found net devices under 0000:18:00.1: mlx_0_1 00:19:29.948 11:13:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.948 11:13:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:29.948 11:13:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:29.948 11:13:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:29.948 11:13:50 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:29.948 11:13:50 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:29.948 11:13:50 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:29.949 11:13:50 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:29.949 11:13:50 -- nvmf/common.sh@57 -- # uname 00:19:29.949 11:13:50 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:29.949 11:13:50 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:29.949 11:13:50 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:29.949 11:13:50 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:29.949 11:13:50 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:29.949 11:13:50 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:29.949 11:13:50 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:29.949 11:13:50 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:29.949 11:13:50 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:29.949 11:13:50 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:29.949 11:13:50 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:29.949 11:13:50 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:29.949 11:13:50 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:29.949 11:13:50 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:29.949 11:13:50 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:29.949 11:13:50 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:29.949 11:13:50 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:29.949 11:13:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:29.949 11:13:50 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:29.949 11:13:50 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:29.949 11:13:50 -- nvmf/common.sh@104 -- # continue 2 00:19:29.949 11:13:50 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:29.949 11:13:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:29.949 11:13:50 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:29.949 11:13:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:29.949 11:13:50 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:29.949 11:13:50 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:29.949 11:13:50 -- nvmf/common.sh@104 -- # continue 2 00:19:29.949 11:13:50 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:19:29.949 11:13:50 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:29.949 11:13:50 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:29.949 11:13:50 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:29.949 11:13:50 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:29.949 11:13:50 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:29.949 11:13:50 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:29.949 11:13:50 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:29.949 11:13:50 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:29.949 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:29.949 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:19:29.949 altname enp24s0f0np0 00:19:29.949 altname ens785f0np0 00:19:29.949 inet 192.168.100.8/24 scope global mlx_0_0 00:19:29.949 valid_lft forever preferred_lft forever 00:19:29.949 11:13:50 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:29.949 11:13:50 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:29.949 11:13:50 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:29.949 11:13:50 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:29.949 11:13:50 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:29.949 11:13:50 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:29.949 11:13:50 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:29.949 11:13:50 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:29.949 11:13:50 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:29.949 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:29.949 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:19:29.949 altname enp24s0f1np1 00:19:29.949 altname ens785f1np1 00:19:29.949 inet 192.168.100.9/24 scope global mlx_0_1 00:19:29.949 valid_lft forever preferred_lft forever 00:19:29.949 11:13:50 -- nvmf/common.sh@410 -- # return 0 00:19:29.949 11:13:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:29.949 11:13:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:29.949 11:13:50 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:29.949 11:13:50 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:29.949 11:13:50 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:29.949 11:13:50 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:29.949 11:13:50 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:29.949 11:13:50 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:29.949 11:13:50 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:29.949 11:13:50 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:29.949 11:13:50 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:29.949 11:13:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:29.949 11:13:50 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:29.949 11:13:50 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:29.949 11:13:50 -- nvmf/common.sh@104 -- # continue 2 00:19:29.949 11:13:50 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:29.949 11:13:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:29.949 11:13:50 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:29.949 11:13:50 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:29.949 11:13:50 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:29.949 11:13:50 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:29.949 11:13:50 -- 
nvmf/common.sh@104 -- # continue 2 00:19:29.949 11:13:50 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:29.949 11:13:50 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:29.949 11:13:50 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:29.949 11:13:50 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:29.949 11:13:50 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:29.949 11:13:50 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:29.949 11:13:50 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:29.949 11:13:50 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:29.949 11:13:50 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:29.949 11:13:50 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:29.949 11:13:50 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:29.949 11:13:50 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:29.949 11:13:50 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:29.949 192.168.100.9' 00:19:29.949 11:13:50 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:29.949 192.168.100.9' 00:19:29.949 11:13:50 -- nvmf/common.sh@445 -- # head -n 1 00:19:29.949 11:13:50 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:29.949 11:13:50 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:29.949 192.168.100.9' 00:19:29.949 11:13:50 -- nvmf/common.sh@446 -- # tail -n +2 00:19:29.949 11:13:50 -- nvmf/common.sh@446 -- # head -n 1 00:19:29.949 11:13:50 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:29.949 11:13:50 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:29.949 11:13:50 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:29.949 11:13:50 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:29.949 11:13:50 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:29.949 11:13:50 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:29.949 11:13:50 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:29.949 11:13:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:29.949 11:13:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:29.949 11:13:50 -- common/autotest_common.sh@10 -- # set +x 00:19:29.949 11:13:50 -- nvmf/common.sh@469 -- # nvmfpid=1656316 00:19:29.949 11:13:50 -- nvmf/common.sh@470 -- # waitforlisten 1656316 00:19:29.949 11:13:50 -- common/autotest_common.sh@829 -- # '[' -z 1656316 ']' 00:19:29.949 11:13:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.949 11:13:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:29.949 11:13:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:29.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:29.949 11:13:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:29.949 11:13:50 -- common/autotest_common.sh@10 -- # set +x 00:19:29.949 11:13:50 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:29.949 [2024-12-13 11:13:50.362584] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
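nvmfappstart here amounts to launching the target binary in the background and blocking until its RPC socket answers; a minimal sketch with the flags from this run (waitforlisten is the common.sh helper that polls /var/tmp/spdk.sock, shown only as a placeholder):

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
  nvmfpid=$!                   # 1656316 in this run
  waitforlisten "$nvmfpid"     # returns once /var/tmp/spdk.sock accepts RPC connections

The -m 0x78 core mask is why the startup banner below reports four reactors, on cores 3 through 6.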
00:19:29.949 [2024-12-13 11:13:50.362629] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:29.949 EAL: No free 2048 kB hugepages reported on node 1 00:19:29.949 [2024-12-13 11:13:50.413184] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:29.949 [2024-12-13 11:13:50.478663] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:29.949 [2024-12-13 11:13:50.478765] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:29.949 [2024-12-13 11:13:50.478772] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:29.949 [2024-12-13 11:13:50.478778] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:29.949 [2024-12-13 11:13:50.478884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:29.949 [2024-12-13 11:13:50.479232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:29.949 [2024-12-13 11:13:50.479271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:29.949 [2024-12-13 11:13:50.479280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:30.881 11:13:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:30.881 11:13:51 -- common/autotest_common.sh@862 -- # return 0 00:19:30.881 11:13:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:30.881 11:13:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:30.881 11:13:51 -- common/autotest_common.sh@10 -- # set +x 00:19:30.881 11:13:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:30.881 11:13:51 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:30.881 11:13:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.881 11:13:51 -- common/autotest_common.sh@10 -- # set +x 00:19:30.881 [2024-12-13 11:13:51.208875] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xea6240/0xeaa730) succeed. 00:19:30.881 [2024-12-13 11:13:51.217033] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xea7830/0xeebdd0) succeed. 
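With the reactors up, bdevio.sh configures the target entirely over the RPC socket. The trace below runs this sequence through the rpc_cmd helper; invoking scripts/rpc.py directly with the same arguments is the equivalent stand-alone form:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The two "Create IB device ... succeed" notices come from the transport binding to both mlx5 ports, and the listener notice confirms the subsystem is reachable at 192.168.100.8:4420.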
00:19:30.881 11:13:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.881 11:13:51 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:30.881 11:13:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.881 11:13:51 -- common/autotest_common.sh@10 -- # set +x 00:19:30.881 Malloc0 00:19:30.881 11:13:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.881 11:13:51 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:30.881 11:13:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.881 11:13:51 -- common/autotest_common.sh@10 -- # set +x 00:19:30.881 11:13:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.881 11:13:51 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:30.881 11:13:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.881 11:13:51 -- common/autotest_common.sh@10 -- # set +x 00:19:30.881 11:13:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.881 11:13:51 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:30.881 11:13:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.881 11:13:51 -- common/autotest_common.sh@10 -- # set +x 00:19:30.881 [2024-12-13 11:13:51.366255] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:30.881 11:13:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.881 11:13:51 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:30.881 11:13:51 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:30.881 11:13:51 -- nvmf/common.sh@520 -- # config=() 00:19:30.881 11:13:51 -- nvmf/common.sh@520 -- # local subsystem config 00:19:30.881 11:13:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:30.881 11:13:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:30.881 { 00:19:30.881 "params": { 00:19:30.881 "name": "Nvme$subsystem", 00:19:30.881 "trtype": "$TEST_TRANSPORT", 00:19:30.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:30.881 "adrfam": "ipv4", 00:19:30.881 "trsvcid": "$NVMF_PORT", 00:19:30.882 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:30.882 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:30.882 "hdgst": ${hdgst:-false}, 00:19:30.882 "ddgst": ${ddgst:-false} 00:19:30.882 }, 00:19:30.882 "method": "bdev_nvme_attach_controller" 00:19:30.882 } 00:19:30.882 EOF 00:19:30.882 )") 00:19:30.882 11:13:51 -- nvmf/common.sh@542 -- # cat 00:19:30.882 11:13:51 -- nvmf/common.sh@544 -- # jq . 00:19:30.882 11:13:51 -- nvmf/common.sh@545 -- # IFS=, 00:19:30.882 11:13:51 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:30.882 "params": { 00:19:30.882 "name": "Nvme1", 00:19:30.882 "trtype": "rdma", 00:19:30.882 "traddr": "192.168.100.8", 00:19:30.882 "adrfam": "ipv4", 00:19:30.882 "trsvcid": "4420", 00:19:30.882 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:30.882 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:30.882 "hdgst": false, 00:19:30.882 "ddgst": false 00:19:30.882 }, 00:19:30.882 "method": "bdev_nvme_attach_controller" 00:19:30.882 }' 00:19:30.882 [2024-12-13 11:13:51.412420] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
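The --json /dev/fd/62 argument above feeds bdevio the output of gen_nvmf_target_json; stripped of the interleaved timestamps, the controller entry printed by the trace is:

  {
    "params": {
      "name": "Nvme1",
      "trtype": "rdma",
      "traddr": "192.168.100.8",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode1",
      "hostnqn": "nqn.2016-06.io.spdk:host1",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }

So the bdevio app attaches over RDMA to the subsystem that was just created, and the CUnit suite that follows runs against the resulting Nvme1n1 bdev.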
00:19:30.882 [2024-12-13 11:13:51.412464] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1656598 ] 00:19:30.882 EAL: No free 2048 kB hugepages reported on node 1 00:19:31.139 [2024-12-13 11:13:51.463753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:31.139 [2024-12-13 11:13:51.532357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.139 [2024-12-13 11:13:51.532450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.139 [2024-12-13 11:13:51.532452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.139 [2024-12-13 11:13:51.688597] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:31.139 [2024-12-13 11:13:51.688626] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:31.139 I/O targets: 00:19:31.139 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:31.139 00:19:31.139 00:19:31.139 CUnit - A unit testing framework for C - Version 2.1-3 00:19:31.139 http://cunit.sourceforge.net/ 00:19:31.139 00:19:31.139 00:19:31.139 Suite: bdevio tests on: Nvme1n1 00:19:31.139 Test: blockdev write read block ...passed 00:19:31.139 Test: blockdev write zeroes read block ...passed 00:19:31.139 Test: blockdev write zeroes read no split ...passed 00:19:31.139 Test: blockdev write zeroes read split ...passed 00:19:31.397 Test: blockdev write zeroes read split partial ...passed 00:19:31.397 Test: blockdev reset ...[2024-12-13 11:13:51.718574] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:31.397 [2024-12-13 11:13:51.740015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:31.397 [2024-12-13 11:13:51.767869] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:31.397 passed 00:19:31.397 Test: blockdev write read 8 blocks ...passed 00:19:31.397 Test: blockdev write read size > 128k ...passed 00:19:31.397 Test: blockdev write read invalid size ...passed 00:19:31.397 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:31.397 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:31.397 Test: blockdev write read max offset ...passed 00:19:31.397 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:31.397 Test: blockdev writev readv 8 blocks ...passed 00:19:31.397 Test: blockdev writev readv 30 x 1block ...passed 00:19:31.397 Test: blockdev writev readv block ...passed 00:19:31.397 Test: blockdev writev readv size > 128k ...passed 00:19:31.397 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:31.397 Test: blockdev comparev and writev ...[2024-12-13 11:13:51.770483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:31.397 [2024-12-13 11:13:51.770506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:31.397 [2024-12-13 11:13:51.770515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:31.397 [2024-12-13 11:13:51.770522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:31.397 [2024-12-13 11:13:51.770676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:31.397 [2024-12-13 11:13:51.770684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:31.397 [2024-12-13 11:13:51.770692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:31.397 [2024-12-13 11:13:51.770698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:31.397 [2024-12-13 11:13:51.770841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:31.397 [2024-12-13 11:13:51.770849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:31.397 [2024-12-13 11:13:51.770856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:31.397 [2024-12-13 11:13:51.770862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:31.397 [2024-12-13 11:13:51.771024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:31.397 [2024-12-13 11:13:51.771032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:31.397 [2024-12-13 11:13:51.771039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:31.397 [2024-12-13 11:13:51.771045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:31.397 passed 00:19:31.397 Test: blockdev nvme passthru rw ...passed 00:19:31.397 Test: blockdev nvme passthru vendor specific ...[2024-12-13 11:13:51.771278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:31.397 [2024-12-13 11:13:51.771291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:31.397 [2024-12-13 11:13:51.771326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:31.397 [2024-12-13 11:13:51.771333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:31.397 [2024-12-13 11:13:51.771374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:31.397 [2024-12-13 11:13:51.771381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:31.397 [2024-12-13 11:13:51.771421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:31.397 [2024-12-13 11:13:51.771428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:31.397 passed 00:19:31.397 Test: blockdev nvme admin passthru ...passed 00:19:31.397 Test: blockdev copy ...passed 00:19:31.397 00:19:31.397 Run Summary: Type Total Ran Passed Failed Inactive 00:19:31.397 suites 1 1 n/a 0 0 00:19:31.397 tests 23 23 23 0 0 00:19:31.397 asserts 152 152 152 0 n/a 00:19:31.397 00:19:31.397 Elapsed time = 0.168 seconds 00:19:31.655 11:13:51 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:31.655 11:13:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.655 11:13:51 -- common/autotest_common.sh@10 -- # set +x 00:19:31.655 11:13:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.655 11:13:51 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:31.655 11:13:51 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:31.655 11:13:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:31.655 11:13:51 -- nvmf/common.sh@116 -- # sync 00:19:31.655 11:13:51 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:31.655 11:13:51 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:31.655 11:13:51 -- nvmf/common.sh@119 -- # set +e 00:19:31.655 11:13:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:31.655 11:13:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:31.655 rmmod nvme_rdma 00:19:31.655 rmmod nvme_fabrics 00:19:31.655 11:13:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:31.655 11:13:52 -- nvmf/common.sh@123 -- # set -e 00:19:31.655 11:13:52 -- nvmf/common.sh@124 -- # return 0 00:19:31.655 11:13:52 -- nvmf/common.sh@477 -- # '[' -n 1656316 ']' 00:19:31.655 11:13:52 -- nvmf/common.sh@478 -- # killprocess 1656316 00:19:31.655 11:13:52 -- common/autotest_common.sh@936 -- # '[' -z 1656316 ']' 00:19:31.655 11:13:52 -- common/autotest_common.sh@940 -- # kill -0 1656316 00:19:31.655 11:13:52 -- common/autotest_common.sh@941 -- # uname 00:19:31.655 11:13:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:31.655 11:13:52 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1656316 00:19:31.655 11:13:52 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:19:31.655 11:13:52 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:19:31.655 11:13:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1656316' 00:19:31.655 killing process with pid 1656316 00:19:31.655 11:13:52 -- common/autotest_common.sh@955 -- # kill 1656316 00:19:31.655 11:13:52 -- common/autotest_common.sh@960 -- # wait 1656316 00:19:31.912 11:13:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:31.912 11:13:52 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:31.912 00:19:31.912 real 0m7.524s 00:19:31.912 user 0m9.881s 00:19:31.912 sys 0m4.549s 00:19:31.912 11:13:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:31.912 11:13:52 -- common/autotest_common.sh@10 -- # set +x 00:19:31.912 ************************************ 00:19:31.912 END TEST nvmf_bdevio 00:19:31.912 ************************************ 00:19:31.912 11:13:52 -- nvmf/nvmf.sh@57 -- # '[' rdma = tcp ']' 00:19:31.912 11:13:52 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:19:31.912 11:13:52 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:19:31.912 11:13:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:31.912 11:13:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:31.912 11:13:52 -- common/autotest_common.sh@10 -- # set +x 00:19:31.912 ************************************ 00:19:31.912 START TEST nvmf_fuzz 00:19:31.912 ************************************ 00:19:31.912 11:13:52 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:19:31.912 * Looking for test storage... 00:19:31.912 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:31.912 11:13:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:31.912 11:13:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:31.912 11:13:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:32.170 11:13:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:32.170 11:13:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:32.170 11:13:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:32.170 11:13:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:32.170 11:13:52 -- scripts/common.sh@335 -- # IFS=.-: 00:19:32.170 11:13:52 -- scripts/common.sh@335 -- # read -ra ver1 00:19:32.170 11:13:52 -- scripts/common.sh@336 -- # IFS=.-: 00:19:32.170 11:13:52 -- scripts/common.sh@336 -- # read -ra ver2 00:19:32.170 11:13:52 -- scripts/common.sh@337 -- # local 'op=<' 00:19:32.170 11:13:52 -- scripts/common.sh@339 -- # ver1_l=2 00:19:32.170 11:13:52 -- scripts/common.sh@340 -- # ver2_l=1 00:19:32.170 11:13:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:32.170 11:13:52 -- scripts/common.sh@343 -- # case "$op" in 00:19:32.170 11:13:52 -- scripts/common.sh@344 -- # : 1 00:19:32.170 11:13:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:32.170 11:13:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:32.170 11:13:52 -- scripts/common.sh@364 -- # decimal 1 00:19:32.170 11:13:52 -- scripts/common.sh@352 -- # local d=1 00:19:32.170 11:13:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:32.170 11:13:52 -- scripts/common.sh@354 -- # echo 1 00:19:32.170 11:13:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:32.170 11:13:52 -- scripts/common.sh@365 -- # decimal 2 00:19:32.170 11:13:52 -- scripts/common.sh@352 -- # local d=2 00:19:32.170 11:13:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:32.170 11:13:52 -- scripts/common.sh@354 -- # echo 2 00:19:32.170 11:13:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:32.170 11:13:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:32.170 11:13:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:32.170 11:13:52 -- scripts/common.sh@367 -- # return 0 00:19:32.170 11:13:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:32.170 11:13:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:32.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.170 --rc genhtml_branch_coverage=1 00:19:32.170 --rc genhtml_function_coverage=1 00:19:32.170 --rc genhtml_legend=1 00:19:32.170 --rc geninfo_all_blocks=1 00:19:32.170 --rc geninfo_unexecuted_blocks=1 00:19:32.170 00:19:32.170 ' 00:19:32.170 11:13:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:32.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.170 --rc genhtml_branch_coverage=1 00:19:32.170 --rc genhtml_function_coverage=1 00:19:32.170 --rc genhtml_legend=1 00:19:32.170 --rc geninfo_all_blocks=1 00:19:32.170 --rc geninfo_unexecuted_blocks=1 00:19:32.170 00:19:32.170 ' 00:19:32.170 11:13:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:32.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.170 --rc genhtml_branch_coverage=1 00:19:32.170 --rc genhtml_function_coverage=1 00:19:32.170 --rc genhtml_legend=1 00:19:32.170 --rc geninfo_all_blocks=1 00:19:32.170 --rc geninfo_unexecuted_blocks=1 00:19:32.170 00:19:32.170 ' 00:19:32.170 11:13:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:32.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.170 --rc genhtml_branch_coverage=1 00:19:32.170 --rc genhtml_function_coverage=1 00:19:32.170 --rc genhtml_legend=1 00:19:32.170 --rc geninfo_all_blocks=1 00:19:32.170 --rc geninfo_unexecuted_blocks=1 00:19:32.170 00:19:32.170 ' 00:19:32.170 11:13:52 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:32.170 11:13:52 -- nvmf/common.sh@7 -- # uname -s 00:19:32.170 11:13:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:32.170 11:13:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:32.170 11:13:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:32.170 11:13:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:32.170 11:13:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:32.170 11:13:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:32.170 11:13:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:32.170 11:13:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:32.170 11:13:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:32.170 11:13:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:32.170 11:13:52 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:19:32.170 11:13:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:19:32.170 11:13:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:32.170 11:13:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:32.170 11:13:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:32.170 11:13:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:32.170 11:13:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:32.170 11:13:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:32.170 11:13:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:32.170 11:13:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.171 11:13:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.171 11:13:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.171 11:13:52 -- paths/export.sh@5 -- # export PATH 00:19:32.171 11:13:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.171 11:13:52 -- nvmf/common.sh@46 -- # : 0 00:19:32.171 11:13:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:32.171 11:13:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:32.171 11:13:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:32.171 11:13:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:32.171 11:13:52 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:32.171 11:13:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:32.171 11:13:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:32.171 11:13:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:32.171 11:13:52 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:19:32.171 11:13:52 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:32.171 11:13:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:32.171 11:13:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:32.171 11:13:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:32.171 11:13:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:32.171 11:13:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.171 11:13:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:32.171 11:13:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.171 11:13:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:32.171 11:13:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:32.171 11:13:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:32.171 11:13:52 -- common/autotest_common.sh@10 -- # set +x 00:19:37.428 11:13:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:37.428 11:13:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:37.428 11:13:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:37.428 11:13:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:37.428 11:13:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:37.428 11:13:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:37.428 11:13:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:37.428 11:13:57 -- nvmf/common.sh@294 -- # net_devs=() 00:19:37.428 11:13:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:37.428 11:13:57 -- nvmf/common.sh@295 -- # e810=() 00:19:37.428 11:13:57 -- nvmf/common.sh@295 -- # local -ga e810 00:19:37.428 11:13:57 -- nvmf/common.sh@296 -- # x722=() 00:19:37.428 11:13:57 -- nvmf/common.sh@296 -- # local -ga x722 00:19:37.428 11:13:57 -- nvmf/common.sh@297 -- # mlx=() 00:19:37.428 11:13:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:37.428 11:13:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:37.428 11:13:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:37.428 11:13:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:37.428 11:13:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:37.428 11:13:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:37.428 11:13:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:37.428 11:13:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:37.428 11:13:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:37.428 11:13:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:37.428 11:13:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:37.428 11:13:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:37.428 11:13:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:37.428 11:13:57 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:37.428 11:13:57 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:37.428 11:13:57 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:37.428 11:13:57 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 
00:19:37.428 11:13:57 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:37.428 11:13:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:37.428 11:13:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:37.428 11:13:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:19:37.428 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:19:37.428 11:13:57 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:37.428 11:13:57 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:37.428 11:13:57 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:37.428 11:13:57 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:37.428 11:13:57 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:37.428 11:13:57 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:37.428 11:13:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:37.428 11:13:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:19:37.428 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:19:37.428 11:13:57 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:37.428 11:13:57 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:37.428 11:13:57 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:37.428 11:13:57 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:37.428 11:13:57 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:37.428 11:13:57 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:37.428 11:13:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:37.428 11:13:57 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:37.428 11:13:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:37.428 11:13:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.428 11:13:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:37.428 11:13:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.428 11:13:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:19:37.428 Found net devices under 0000:18:00.0: mlx_0_0 00:19:37.428 11:13:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.428 11:13:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:37.428 11:13:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.428 11:13:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:37.428 11:13:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.428 11:13:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:19:37.428 Found net devices under 0000:18:00.1: mlx_0_1 00:19:37.428 11:13:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.428 11:13:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:37.428 11:13:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:37.428 11:13:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:37.428 11:13:57 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:37.428 11:13:57 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:37.428 11:13:57 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:37.428 11:13:57 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:37.429 11:13:57 -- nvmf/common.sh@57 -- # uname 00:19:37.429 11:13:57 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:37.429 11:13:57 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:37.429 11:13:57 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:37.429 11:13:57 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:37.429 
11:13:57 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:37.429 11:13:57 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:37.429 11:13:57 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:37.429 11:13:57 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:37.429 11:13:57 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:37.429 11:13:57 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:37.429 11:13:57 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:37.429 11:13:57 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:37.429 11:13:57 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:37.429 11:13:57 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:37.429 11:13:57 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:37.429 11:13:57 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:37.429 11:13:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:37.429 11:13:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:37.429 11:13:57 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:37.429 11:13:57 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:37.429 11:13:57 -- nvmf/common.sh@104 -- # continue 2 00:19:37.429 11:13:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:37.429 11:13:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:37.429 11:13:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:37.429 11:13:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:37.429 11:13:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:37.429 11:13:57 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:37.429 11:13:57 -- nvmf/common.sh@104 -- # continue 2 00:19:37.429 11:13:57 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:37.429 11:13:57 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:37.429 11:13:57 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:37.429 11:13:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:37.429 11:13:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:37.429 11:13:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:37.429 11:13:57 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:37.429 11:13:57 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:37.429 11:13:57 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:37.429 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:37.429 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:19:37.429 altname enp24s0f0np0 00:19:37.429 altname ens785f0np0 00:19:37.429 inet 192.168.100.8/24 scope global mlx_0_0 00:19:37.429 valid_lft forever preferred_lft forever 00:19:37.429 11:13:57 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:37.429 11:13:57 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:37.429 11:13:57 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:37.429 11:13:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:37.429 11:13:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:37.429 11:13:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:37.429 11:13:57 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:37.429 11:13:57 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:37.429 11:13:57 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:37.429 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:37.429 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:19:37.429 altname enp24s0f1np1 00:19:37.429 
altname ens785f1np1 00:19:37.429 inet 192.168.100.9/24 scope global mlx_0_1 00:19:37.429 valid_lft forever preferred_lft forever 00:19:37.429 11:13:57 -- nvmf/common.sh@410 -- # return 0 00:19:37.429 11:13:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:37.429 11:13:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:37.429 11:13:57 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:37.429 11:13:57 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:37.429 11:13:57 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:37.429 11:13:57 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:37.429 11:13:57 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:37.429 11:13:57 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:37.429 11:13:57 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:37.429 11:13:57 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:37.429 11:13:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:37.429 11:13:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:37.429 11:13:57 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:37.429 11:13:57 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:37.429 11:13:57 -- nvmf/common.sh@104 -- # continue 2 00:19:37.429 11:13:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:37.429 11:13:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:37.429 11:13:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:37.429 11:13:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:37.429 11:13:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:37.429 11:13:57 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:37.429 11:13:57 -- nvmf/common.sh@104 -- # continue 2 00:19:37.429 11:13:57 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:37.429 11:13:57 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:37.429 11:13:57 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:37.429 11:13:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:37.429 11:13:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:37.429 11:13:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:37.429 11:13:57 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:37.429 11:13:57 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:37.429 11:13:57 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:37.429 11:13:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:37.429 11:13:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:37.429 11:13:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:37.429 11:13:57 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:37.429 192.168.100.9' 00:19:37.429 11:13:57 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:37.429 192.168.100.9' 00:19:37.429 11:13:57 -- nvmf/common.sh@445 -- # head -n 1 00:19:37.429 11:13:57 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:37.429 11:13:57 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:37.429 192.168.100.9' 00:19:37.429 11:13:57 -- nvmf/common.sh@446 -- # tail -n +2 00:19:37.429 11:13:57 -- nvmf/common.sh@446 -- # head -n 1 00:19:37.429 11:13:57 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:37.429 11:13:57 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:37.429 11:13:57 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 
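The address discovery traced above reduces to parsing ip -o -4 addr show for each RoCE netdev; a self-contained sketch of the same check (interface names taken from this testbed) is:

    for ifc in mlx_0_0 mlx_0_1; do
      # field 4 of 'ip -o -4 addr show' is the CIDR address; strip the prefix length
      echo "$ifc: $(ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1)"
    done

which yields the 192.168.100.8 / 192.168.100.9 pair that nvmftestinit records in RDMA_IP_LIST before loading nvme-rdma below.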
00:19:37.429 11:13:57 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:37.429 11:13:57 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:37.429 11:13:57 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:37.429 11:13:57 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1659877 00:19:37.429 11:13:57 -- target/fabrics_fuzz.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:37.429 11:13:57 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:37.429 11:13:57 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1659877 00:19:37.429 11:13:57 -- common/autotest_common.sh@829 -- # '[' -z 1659877 ']' 00:19:37.429 11:13:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.429 11:13:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:37.429 11:13:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.429 11:13:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:37.429 11:13:57 -- common/autotest_common.sh@10 -- # set +x 00:19:38.360 11:13:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:38.360 11:13:58 -- common/autotest_common.sh@862 -- # return 0 00:19:38.360 11:13:58 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:38.360 11:13:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.360 11:13:58 -- common/autotest_common.sh@10 -- # set +x 00:19:38.360 11:13:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.360 11:13:58 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:19:38.360 11:13:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.360 11:13:58 -- common/autotest_common.sh@10 -- # set +x 00:19:38.360 Malloc0 00:19:38.360 11:13:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.360 11:13:58 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:38.360 11:13:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.360 11:13:58 -- common/autotest_common.sh@10 -- # set +x 00:19:38.360 11:13:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.360 11:13:58 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:38.360 11:13:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.360 11:13:58 -- common/autotest_common.sh@10 -- # set +x 00:19:38.360 11:13:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.360 11:13:58 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:38.360 11:13:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.360 11:13:58 -- common/autotest_common.sh@10 -- # set +x 00:19:38.360 11:13:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.360 11:13:58 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' 00:19:38.360 11:13:58 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:rdma adrfam:IPv4 
subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a 00:20:10.400 Fuzzing completed. Shutting down the fuzz application 00:20:10.400 00:20:10.400 Dumping successful admin opcodes: 00:20:10.400 8, 9, 10, 24, 00:20:10.400 Dumping successful io opcodes: 00:20:10.400 0, 9, 00:20:10.400 NS: 0x200003af1f00 I/O qp, Total commands completed: 1304302, total successful commands: 7685, random_seed: 2248765888 00:20:10.400 NS: 0x200003af1f00 admin qp, Total commands completed: 165536, total successful commands: 1338, random_seed: 1734954560 00:20:10.400 11:14:29 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -j /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:20:10.400 Fuzzing completed. Shutting down the fuzz application 00:20:10.400 00:20:10.400 Dumping successful admin opcodes: 00:20:10.400 24, 00:20:10.400 Dumping successful io opcodes: 00:20:10.400 00:20:10.400 NS: 0x200003af1f00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3671634518 00:20:10.400 NS: 0x200003af1f00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3671712990 00:20:10.400 11:14:30 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:10.400 11:14:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.400 11:14:30 -- common/autotest_common.sh@10 -- # set +x 00:20:10.400 11:14:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.400 11:14:30 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:20:10.400 11:14:30 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:20:10.400 11:14:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:10.400 11:14:30 -- nvmf/common.sh@116 -- # sync 00:20:10.400 11:14:30 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:20:10.400 11:14:30 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:20:10.400 11:14:30 -- nvmf/common.sh@119 -- # set +e 00:20:10.400 11:14:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:10.400 11:14:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:20:10.400 rmmod nvme_rdma 00:20:10.400 rmmod nvme_fabrics 00:20:10.400 11:14:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:10.401 11:14:30 -- nvmf/common.sh@123 -- # set -e 00:20:10.401 11:14:30 -- nvmf/common.sh@124 -- # return 0 00:20:10.401 11:14:30 -- nvmf/common.sh@477 -- # '[' -n 1659877 ']' 00:20:10.401 11:14:30 -- nvmf/common.sh@478 -- # killprocess 1659877 00:20:10.401 11:14:30 -- common/autotest_common.sh@936 -- # '[' -z 1659877 ']' 00:20:10.401 11:14:30 -- common/autotest_common.sh@940 -- # kill -0 1659877 00:20:10.401 11:14:30 -- common/autotest_common.sh@941 -- # uname 00:20:10.401 11:14:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:10.401 11:14:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1659877 00:20:10.401 11:14:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:10.401 11:14:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:10.401 11:14:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1659877' 00:20:10.401 killing process with pid 1659877 00:20:10.401 11:14:30 -- common/autotest_common.sh@955 -- # kill 1659877 00:20:10.401 11:14:30 -- common/autotest_common.sh@960 -- # wait 1659877 00:20:10.401 11:14:30 
-- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:10.401 11:14:30 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:20:10.401 11:14:30 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:20:10.401 00:20:10.401 real 0m38.488s 00:20:10.401 user 0m50.733s 00:20:10.401 sys 0m19.012s 00:20:10.401 11:14:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:10.401 11:14:30 -- common/autotest_common.sh@10 -- # set +x 00:20:10.401 ************************************ 00:20:10.401 END TEST nvmf_fuzz 00:20:10.401 ************************************ 00:20:10.401 11:14:30 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:20:10.401 11:14:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:10.401 11:14:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:10.401 11:14:30 -- common/autotest_common.sh@10 -- # set +x 00:20:10.401 ************************************ 00:20:10.401 START TEST nvmf_multiconnection 00:20:10.401 ************************************ 00:20:10.401 11:14:30 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:20:10.658 * Looking for test storage... 00:20:10.658 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:10.658 11:14:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:10.658 11:14:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:10.658 11:14:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:10.658 11:14:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:10.658 11:14:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:10.658 11:14:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:10.658 11:14:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:10.658 11:14:31 -- scripts/common.sh@335 -- # IFS=.-: 00:20:10.658 11:14:31 -- scripts/common.sh@335 -- # read -ra ver1 00:20:10.658 11:14:31 -- scripts/common.sh@336 -- # IFS=.-: 00:20:10.658 11:14:31 -- scripts/common.sh@336 -- # read -ra ver2 00:20:10.658 11:14:31 -- scripts/common.sh@337 -- # local 'op=<' 00:20:10.658 11:14:31 -- scripts/common.sh@339 -- # ver1_l=2 00:20:10.658 11:14:31 -- scripts/common.sh@340 -- # ver2_l=1 00:20:10.658 11:14:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:10.658 11:14:31 -- scripts/common.sh@343 -- # case "$op" in 00:20:10.658 11:14:31 -- scripts/common.sh@344 -- # : 1 00:20:10.658 11:14:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:10.658 11:14:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:10.658 11:14:31 -- scripts/common.sh@364 -- # decimal 1 00:20:10.658 11:14:31 -- scripts/common.sh@352 -- # local d=1 00:20:10.658 11:14:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:10.658 11:14:31 -- scripts/common.sh@354 -- # echo 1 00:20:10.658 11:14:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:10.658 11:14:31 -- scripts/common.sh@365 -- # decimal 2 00:20:10.658 11:14:31 -- scripts/common.sh@352 -- # local d=2 00:20:10.658 11:14:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:10.658 11:14:31 -- scripts/common.sh@354 -- # echo 2 00:20:10.658 11:14:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:10.658 11:14:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:10.658 11:14:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:10.658 11:14:31 -- scripts/common.sh@367 -- # return 0 00:20:10.658 11:14:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:10.658 11:14:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:10.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.658 --rc genhtml_branch_coverage=1 00:20:10.658 --rc genhtml_function_coverage=1 00:20:10.658 --rc genhtml_legend=1 00:20:10.658 --rc geninfo_all_blocks=1 00:20:10.658 --rc geninfo_unexecuted_blocks=1 00:20:10.658 00:20:10.658 ' 00:20:10.658 11:14:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:10.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.658 --rc genhtml_branch_coverage=1 00:20:10.658 --rc genhtml_function_coverage=1 00:20:10.658 --rc genhtml_legend=1 00:20:10.658 --rc geninfo_all_blocks=1 00:20:10.658 --rc geninfo_unexecuted_blocks=1 00:20:10.658 00:20:10.658 ' 00:20:10.658 11:14:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:10.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.658 --rc genhtml_branch_coverage=1 00:20:10.658 --rc genhtml_function_coverage=1 00:20:10.658 --rc genhtml_legend=1 00:20:10.658 --rc geninfo_all_blocks=1 00:20:10.658 --rc geninfo_unexecuted_blocks=1 00:20:10.658 00:20:10.658 ' 00:20:10.658 11:14:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:10.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.658 --rc genhtml_branch_coverage=1 00:20:10.658 --rc genhtml_function_coverage=1 00:20:10.658 --rc genhtml_legend=1 00:20:10.658 --rc geninfo_all_blocks=1 00:20:10.658 --rc geninfo_unexecuted_blocks=1 00:20:10.658 00:20:10.658 ' 00:20:10.658 11:14:31 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:10.658 11:14:31 -- nvmf/common.sh@7 -- # uname -s 00:20:10.658 11:14:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:10.658 11:14:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:10.658 11:14:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:10.658 11:14:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:10.658 11:14:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:10.659 11:14:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:10.659 11:14:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:10.659 11:14:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:10.659 11:14:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:10.659 11:14:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:10.659 11:14:31 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:20:10.659 11:14:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:20:10.659 11:14:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:10.659 11:14:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:10.659 11:14:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:10.659 11:14:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:10.659 11:14:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:10.659 11:14:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:10.659 11:14:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:10.659 11:14:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.659 11:14:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.659 11:14:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.659 11:14:31 -- paths/export.sh@5 -- # export PATH 00:20:10.659 11:14:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.659 11:14:31 -- nvmf/common.sh@46 -- # : 0 00:20:10.659 11:14:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:10.659 11:14:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:10.659 11:14:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:10.659 11:14:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:10.659 11:14:31 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:10.659 11:14:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:10.659 11:14:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:10.659 11:14:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:10.659 11:14:31 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:10.659 11:14:31 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:10.659 11:14:31 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:20:10.659 11:14:31 -- target/multiconnection.sh@16 -- # nvmftestinit 00:20:10.659 11:14:31 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:20:10.659 11:14:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:10.659 11:14:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:10.659 11:14:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:10.659 11:14:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:10.659 11:14:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.659 11:14:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:10.659 11:14:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.659 11:14:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:10.659 11:14:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:10.659 11:14:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:10.659 11:14:31 -- common/autotest_common.sh@10 -- # set +x 00:20:15.916 11:14:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:15.916 11:14:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:15.916 11:14:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:15.916 11:14:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:15.916 11:14:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:15.916 11:14:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:15.916 11:14:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:15.916 11:14:36 -- nvmf/common.sh@294 -- # net_devs=() 00:20:15.916 11:14:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:15.916 11:14:36 -- nvmf/common.sh@295 -- # e810=() 00:20:15.916 11:14:36 -- nvmf/common.sh@295 -- # local -ga e810 00:20:15.916 11:14:36 -- nvmf/common.sh@296 -- # x722=() 00:20:15.916 11:14:36 -- nvmf/common.sh@296 -- # local -ga x722 00:20:15.916 11:14:36 -- nvmf/common.sh@297 -- # mlx=() 00:20:15.916 11:14:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:15.916 11:14:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:15.916 11:14:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:15.916 11:14:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:15.916 11:14:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:15.916 11:14:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:15.916 11:14:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:15.916 11:14:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:15.916 11:14:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:15.916 11:14:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:15.916 11:14:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:15.916 11:14:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:15.916 11:14:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:15.916 11:14:36 -- nvmf/common.sh@320 -- # [[ 
rdma == rdma ]] 00:20:15.916 11:14:36 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:20:15.916 11:14:36 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:20:15.916 11:14:36 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:20:15.916 11:14:36 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:20:15.916 11:14:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:15.916 11:14:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:15.916 11:14:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:20:15.916 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:20:15.916 11:14:36 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:15.916 11:14:36 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:15.916 11:14:36 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:15.916 11:14:36 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:15.916 11:14:36 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:15.916 11:14:36 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:15.916 11:14:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:15.916 11:14:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:20:15.916 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:20:15.916 11:14:36 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:15.916 11:14:36 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:15.916 11:14:36 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:15.916 11:14:36 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:15.916 11:14:36 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:15.916 11:14:36 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:15.916 11:14:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:15.916 11:14:36 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:20:15.916 11:14:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:15.916 11:14:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:15.916 11:14:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:15.916 11:14:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:15.916 11:14:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:20:15.916 Found net devices under 0000:18:00.0: mlx_0_0 00:20:15.916 11:14:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:15.916 11:14:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:15.916 11:14:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:15.916 11:14:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:15.916 11:14:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:15.916 11:14:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:20:15.916 Found net devices under 0000:18:00.1: mlx_0_1 00:20:15.916 11:14:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:15.916 11:14:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:15.916 11:14:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:15.916 11:14:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:15.916 11:14:36 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:20:15.916 11:14:36 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:20:15.916 11:14:36 -- nvmf/common.sh@408 -- # rdma_device_init 00:20:15.916 11:14:36 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:20:15.916 11:14:36 -- nvmf/common.sh@57 -- # uname 00:20:15.916 11:14:36 -- nvmf/common.sh@57 -- # '[' 
Linux '!=' Linux ']' 00:20:15.916 11:14:36 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:20:15.916 11:14:36 -- nvmf/common.sh@62 -- # modprobe ib_core 00:20:15.916 11:14:36 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:20:15.916 11:14:36 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:20:15.916 11:14:36 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:20:15.916 11:14:36 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:20:15.916 11:14:36 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:20:15.916 11:14:36 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:20:15.916 11:14:36 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:15.916 11:14:36 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:20:15.916 11:14:36 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:15.916 11:14:36 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:15.916 11:14:36 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:15.916 11:14:36 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:15.916 11:14:36 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:15.916 11:14:36 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:15.916 11:14:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:15.916 11:14:36 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:15.916 11:14:36 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:15.916 11:14:36 -- nvmf/common.sh@104 -- # continue 2 00:20:15.916 11:14:36 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:15.916 11:14:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:15.916 11:14:36 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:15.916 11:14:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:15.916 11:14:36 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:15.916 11:14:36 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:15.916 11:14:36 -- nvmf/common.sh@104 -- # continue 2 00:20:15.916 11:14:36 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:15.916 11:14:36 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:20:15.916 11:14:36 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:15.916 11:14:36 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:15.916 11:14:36 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:15.916 11:14:36 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:15.916 11:14:36 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:20:15.916 11:14:36 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:20:15.916 11:14:36 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:20:15.916 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:15.916 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:20:15.916 altname enp24s0f0np0 00:20:15.916 altname ens785f0np0 00:20:15.916 inet 192.168.100.8/24 scope global mlx_0_0 00:20:15.916 valid_lft forever preferred_lft forever 00:20:15.916 11:14:36 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:15.916 11:14:36 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:20:15.916 11:14:36 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:15.916 11:14:36 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:15.916 11:14:36 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:15.916 11:14:36 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:15.916 11:14:36 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:20:15.917 11:14:36 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:20:15.917 11:14:36 -- 
nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:20:15.917 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:15.917 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:20:15.917 altname enp24s0f1np1 00:20:15.917 altname ens785f1np1 00:20:15.917 inet 192.168.100.9/24 scope global mlx_0_1 00:20:15.917 valid_lft forever preferred_lft forever 00:20:15.917 11:14:36 -- nvmf/common.sh@410 -- # return 0 00:20:15.917 11:14:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:15.917 11:14:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:15.917 11:14:36 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:20:15.917 11:14:36 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:20:15.917 11:14:36 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:20:15.917 11:14:36 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:15.917 11:14:36 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:15.917 11:14:36 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:15.917 11:14:36 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:16.174 11:14:36 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:16.174 11:14:36 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:16.174 11:14:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:16.174 11:14:36 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:16.174 11:14:36 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:16.174 11:14:36 -- nvmf/common.sh@104 -- # continue 2 00:20:16.174 11:14:36 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:16.174 11:14:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:16.174 11:14:36 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:16.174 11:14:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:16.174 11:14:36 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:16.174 11:14:36 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:16.174 11:14:36 -- nvmf/common.sh@104 -- # continue 2 00:20:16.174 11:14:36 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:16.174 11:14:36 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:20:16.174 11:14:36 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:16.174 11:14:36 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:16.174 11:14:36 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:16.174 11:14:36 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:16.174 11:14:36 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:16.174 11:14:36 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:20:16.174 11:14:36 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:16.174 11:14:36 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:16.174 11:14:36 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:16.174 11:14:36 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:16.174 11:14:36 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:20:16.174 192.168.100.9' 00:20:16.174 11:14:36 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:20:16.174 192.168.100.9' 00:20:16.174 11:14:36 -- nvmf/common.sh@445 -- # head -n 1 00:20:16.174 11:14:36 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:16.174 11:14:36 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:16.174 192.168.100.9' 00:20:16.174 11:14:36 -- nvmf/common.sh@446 -- # tail -n +2 00:20:16.174 11:14:36 -- nvmf/common.sh@446 -- # head -n 1 00:20:16.174 11:14:36 -- 
nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:16.174 11:14:36 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:20:16.174 11:14:36 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:16.174 11:14:36 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:20:16.174 11:14:36 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:20:16.174 11:14:36 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:20:16.174 11:14:36 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:20:16.174 11:14:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:16.174 11:14:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:16.174 11:14:36 -- common/autotest_common.sh@10 -- # set +x 00:20:16.174 11:14:36 -- nvmf/common.sh@469 -- # nvmfpid=1668985 00:20:16.174 11:14:36 -- nvmf/common.sh@470 -- # waitforlisten 1668985 00:20:16.174 11:14:36 -- common/autotest_common.sh@829 -- # '[' -z 1668985 ']' 00:20:16.174 11:14:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.174 11:14:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:16.174 11:14:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.174 11:14:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:16.174 11:14:36 -- common/autotest_common.sh@10 -- # set +x 00:20:16.174 11:14:36 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:16.174 [2024-12-13 11:14:36.605249] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:16.174 [2024-12-13 11:14:36.605311] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.174 EAL: No free 2048 kB hugepages reported on node 1 00:20:16.174 [2024-12-13 11:14:36.655163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:16.174 [2024-12-13 11:14:36.728288] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:16.174 [2024-12-13 11:14:36.728388] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:16.174 [2024-12-13 11:14:36.728396] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:16.174 [2024-12-13 11:14:36.728402] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
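For reference, the nvmf/common.sh steps traced above reduce to the following manual sequence on a host with these two mlx5 ports (a minimal sketch reconstructed from this trace; the interface names, addresses and nvmf_tgt path are the ones reported in this run and will differ on other machines):

  # Load the IB/RDMA core stack plus the NVMe-oF RDMA initiator, as load_ib_rdma_modules does
  modprobe ib_cm
  modprobe ib_core
  modprobe ib_umad
  modprobe ib_uverbs
  modprobe iw_cm
  modprobe rdma_cm
  modprobe rdma_ucm
  modprobe nvme-rdma

  # Verify the RDMA-capable netdevs and the addresses the test will target
  ip -o -4 addr show mlx_0_0    # 192.168.100.8/24 in this run (NVMF_FIRST_TARGET_IP)
  ip -o -4 addr show mlx_0_1    # 192.168.100.9/24 in this run (NVMF_SECOND_TARGET_IP)

  # Start the SPDK NVMe-oF target with the flags nvmfappstart passes here (-m 0xF = 4 cores, as reported below)
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &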
00:20:16.174 [2024-12-13 11:14:36.728438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:16.174 [2024-12-13 11:14:36.728534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:16.174 [2024-12-13 11:14:36.728622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:16.174 [2024-12-13 11:14:36.728624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.104 11:14:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:17.105 11:14:37 -- common/autotest_common.sh@862 -- # return 0 00:20:17.105 11:14:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:17.105 11:14:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:17.105 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.105 11:14:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:17.105 11:14:37 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:17.105 11:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.105 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.105 [2024-12-13 11:14:37.456351] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6cb960/0x6cfe50) succeed. 00:20:17.105 [2024-12-13 11:14:37.464570] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6ccf50/0x7114f0) succeed. 00:20:17.105 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.105 11:14:37 -- target/multiconnection.sh@21 -- # seq 1 11 00:20:17.105 11:14:37 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:17.105 11:14:37 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:17.105 11:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.105 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.105 Malloc1 00:20:17.105 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.105 11:14:37 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:20:17.105 11:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.105 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.105 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.105 11:14:37 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:17.105 11:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.105 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.105 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.105 11:14:37 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:17.105 11:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.105 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.105 [2024-12-13 11:14:37.623199] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:17.105 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.105 11:14:37 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:17.105 11:14:37 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:20:17.105 11:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.105 11:14:37 -- common/autotest_common.sh@10 -- # set +x 
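The multiconnection.sh loop that starts here repeats one RPC sequence per subsystem (cnode1 through cnode11). Issued by hand against the running target, each iteration would look roughly like the following (a sketch using scripts/rpc.py with the same arguments rpc_cmd passes in this trace; only Malloc1/cnode1 is shown, the remaining ten substitute their own index):

  # Create a 64 MB RAM-backed bdev with 512-byte blocks
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  # Create the subsystem, allow any host (-a), and set its serial number
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
  # Attach the bdev as a namespace and expose the subsystem over RDMA on port 4420
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The transport itself was created once beforehand with nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192, and each subsystem is later connected from the initiator side with nvme connect -i 15 ... -t rdma -n nqn.2016-06.io.spdk:cnodeN -a 192.168.100.8 -s 4420, as the trace that follows shows.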
00:20:17.105 Malloc2 00:20:17.105 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.105 11:14:37 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:20:17.105 11:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.105 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.105 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.105 11:14:37 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:20:17.105 11:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.105 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.105 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.105 11:14:37 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:20:17.105 11:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.105 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.105 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.105 11:14:37 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:17.105 11:14:37 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:20:17.105 11:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.105 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.362 Malloc3 00:20:17.362 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.362 11:14:37 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:20:17.362 11:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.362 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.362 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.362 11:14:37 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:20:17.362 11:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.362 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.362 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.362 11:14:37 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:20:17.362 11:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.362 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.362 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.362 11:14:37 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:17.362 11:14:37 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:20:17.362 11:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.362 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.362 Malloc4 00:20:17.362 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.362 11:14:37 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:20:17.362 11:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.362 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.362 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.362 11:14:37 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:20:17.362 11:14:37 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.362 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.362 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.362 11:14:37 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:20:17.362 11:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.362 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.362 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.362 11:14:37 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:17.362 11:14:37 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:20:17.362 11:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.362 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.362 Malloc5 00:20:17.362 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.362 11:14:37 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:20:17.362 11:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.363 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.363 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.363 11:14:37 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:20:17.363 11:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.363 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.363 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.363 11:14:37 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:20:17.363 11:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.363 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.363 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.363 11:14:37 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:17.363 11:14:37 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:20:17.363 11:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.363 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.363 Malloc6 00:20:17.363 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.363 11:14:37 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:20:17.363 11:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.363 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.363 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.363 11:14:37 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:20:17.363 11:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.363 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.363 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.363 11:14:37 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t rdma -a 192.168.100.8 -s 4420 00:20:17.363 11:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.363 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.363 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.363 11:14:37 -- 
target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:17.363 11:14:37 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:20:17.363 11:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.363 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.363 Malloc7 00:20:17.363 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.363 11:14:37 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:20:17.363 11:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.363 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.363 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.363 11:14:37 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:20:17.363 11:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.363 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.363 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.363 11:14:37 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t rdma -a 192.168.100.8 -s 4420 00:20:17.363 11:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.363 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.363 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.363 11:14:37 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:17.363 11:14:37 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:20:17.363 11:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.363 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.363 Malloc8 00:20:17.363 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.363 11:14:37 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:20:17.363 11:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.363 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.363 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.363 11:14:37 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:20:17.363 11:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.363 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.363 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.363 11:14:37 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t rdma -a 192.168.100.8 -s 4420 00:20:17.363 11:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.363 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.620 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.620 11:14:37 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:17.620 11:14:37 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:20:17.620 11:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.620 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.620 Malloc9 00:20:17.620 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.620 11:14:37 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:20:17.620 11:14:37 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:17.620 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.620 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.620 11:14:37 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:20:17.620 11:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.620 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.620 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.620 11:14:37 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t rdma -a 192.168.100.8 -s 4420 00:20:17.620 11:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.620 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.620 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.620 11:14:37 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:17.620 11:14:37 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:20:17.620 11:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.620 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.620 Malloc10 00:20:17.620 11:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.620 11:14:37 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:20:17.620 11:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.620 11:14:37 -- common/autotest_common.sh@10 -- # set +x 00:20:17.620 11:14:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.620 11:14:38 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:20:17.620 11:14:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.620 11:14:38 -- common/autotest_common.sh@10 -- # set +x 00:20:17.620 11:14:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.620 11:14:38 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t rdma -a 192.168.100.8 -s 4420 00:20:17.620 11:14:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.620 11:14:38 -- common/autotest_common.sh@10 -- # set +x 00:20:17.620 11:14:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.620 11:14:38 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:17.620 11:14:38 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:20:17.620 11:14:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.620 11:14:38 -- common/autotest_common.sh@10 -- # set +x 00:20:17.620 Malloc11 00:20:17.620 11:14:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.620 11:14:38 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:20:17.620 11:14:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.620 11:14:38 -- common/autotest_common.sh@10 -- # set +x 00:20:17.620 11:14:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.620 11:14:38 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:20:17.620 11:14:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.620 11:14:38 -- common/autotest_common.sh@10 -- # set +x 00:20:17.620 11:14:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.620 11:14:38 -- target/multiconnection.sh@25 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t rdma -a 192.168.100.8 -s 4420 00:20:17.620 11:14:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.620 11:14:38 -- common/autotest_common.sh@10 -- # set +x 00:20:17.620 11:14:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.620 11:14:38 -- target/multiconnection.sh@28 -- # seq 1 11 00:20:17.620 11:14:38 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:17.620 11:14:38 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:20:18.550 11:14:39 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:20:18.550 11:14:39 -- common/autotest_common.sh@1187 -- # local i=0 00:20:18.550 11:14:39 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:20:18.550 11:14:39 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:20:18.550 11:14:39 -- common/autotest_common.sh@1194 -- # sleep 2 00:20:21.068 11:14:41 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:20:21.068 11:14:41 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:20:21.068 11:14:41 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:20:21.068 11:14:41 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:20:21.068 11:14:41 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:20:21.068 11:14:41 -- common/autotest_common.sh@1197 -- # return 0 00:20:21.068 11:14:41 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:21.068 11:14:41 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:20:21.632 11:14:42 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:20:21.632 11:14:42 -- common/autotest_common.sh@1187 -- # local i=0 00:20:21.632 11:14:42 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:20:21.632 11:14:42 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:20:21.632 11:14:42 -- common/autotest_common.sh@1194 -- # sleep 2 00:20:23.527 11:14:44 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:20:23.527 11:14:44 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:20:23.527 11:14:44 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:20:23.527 11:14:44 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:20:23.527 11:14:44 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:20:23.527 11:14:44 -- common/autotest_common.sh@1197 -- # return 0 00:20:23.527 11:14:44 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:23.527 11:14:44 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:20:24.895 11:14:45 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:20:24.895 11:14:45 -- common/autotest_common.sh@1187 -- # local i=0 00:20:24.895 11:14:45 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:20:24.895 11:14:45 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:20:24.895 11:14:45 -- 
common/autotest_common.sh@1194 -- # sleep 2 00:20:26.788 11:14:47 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:20:26.788 11:14:47 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:20:26.788 11:14:47 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:20:26.788 11:14:47 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:20:26.788 11:14:47 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:20:26.788 11:14:47 -- common/autotest_common.sh@1197 -- # return 0 00:20:26.788 11:14:47 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:26.788 11:14:47 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:20:27.717 11:14:48 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:20:27.717 11:14:48 -- common/autotest_common.sh@1187 -- # local i=0 00:20:27.717 11:14:48 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:20:27.717 11:14:48 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:20:27.717 11:14:48 -- common/autotest_common.sh@1194 -- # sleep 2 00:20:29.610 11:14:50 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:20:29.610 11:14:50 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:20:29.610 11:14:50 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:20:29.610 11:14:50 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:20:29.610 11:14:50 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:20:29.610 11:14:50 -- common/autotest_common.sh@1197 -- # return 0 00:20:29.610 11:14:50 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:29.610 11:14:50 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:20:30.539 11:14:51 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:20:30.539 11:14:51 -- common/autotest_common.sh@1187 -- # local i=0 00:20:30.539 11:14:51 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:20:30.539 11:14:51 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:20:30.539 11:14:51 -- common/autotest_common.sh@1194 -- # sleep 2 00:20:33.059 11:14:53 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:20:33.059 11:14:53 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:20:33.059 11:14:53 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:20:33.059 11:14:53 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:20:33.059 11:14:53 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:20:33.059 11:14:53 -- common/autotest_common.sh@1197 -- # return 0 00:20:33.059 11:14:53 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:33.059 11:14:53 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode6 -a 192.168.100.8 -s 4420 00:20:33.630 11:14:54 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:20:33.630 11:14:54 -- common/autotest_common.sh@1187 -- # local i=0 00:20:33.630 11:14:54 -- 
common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:20:33.630 11:14:54 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:20:33.630 11:14:54 -- common/autotest_common.sh@1194 -- # sleep 2 00:20:35.522 11:14:56 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:20:35.522 11:14:56 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:20:35.522 11:14:56 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:20:35.522 11:14:56 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:20:35.522 11:14:56 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:20:35.522 11:14:56 -- common/autotest_common.sh@1197 -- # return 0 00:20:35.522 11:14:56 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:35.522 11:14:56 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode7 -a 192.168.100.8 -s 4420 00:20:36.891 11:14:57 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:20:36.891 11:14:57 -- common/autotest_common.sh@1187 -- # local i=0 00:20:36.891 11:14:57 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:20:36.891 11:14:57 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:20:36.891 11:14:57 -- common/autotest_common.sh@1194 -- # sleep 2 00:20:38.785 11:14:59 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:20:38.785 11:14:59 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:20:38.785 11:14:59 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:20:38.785 11:14:59 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:20:38.785 11:14:59 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:20:38.785 11:14:59 -- common/autotest_common.sh@1197 -- # return 0 00:20:38.785 11:14:59 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:38.785 11:14:59 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode8 -a 192.168.100.8 -s 4420 00:20:39.715 11:15:00 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:20:39.715 11:15:00 -- common/autotest_common.sh@1187 -- # local i=0 00:20:39.715 11:15:00 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:20:39.715 11:15:00 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:20:39.715 11:15:00 -- common/autotest_common.sh@1194 -- # sleep 2 00:20:41.608 11:15:02 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:20:41.608 11:15:02 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:20:41.608 11:15:02 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:20:41.608 11:15:02 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:20:41.608 11:15:02 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:20:41.608 11:15:02 -- common/autotest_common.sh@1197 -- # return 0 00:20:41.608 11:15:02 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:41.608 11:15:02 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode9 -a 192.168.100.8 -s 4420 00:20:42.538 
11:15:03 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:20:42.538 11:15:03 -- common/autotest_common.sh@1187 -- # local i=0 00:20:42.538 11:15:03 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:20:42.538 11:15:03 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:20:42.538 11:15:03 -- common/autotest_common.sh@1194 -- # sleep 2 00:20:45.058 11:15:05 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:20:45.058 11:15:05 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:20:45.058 11:15:05 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:20:45.058 11:15:05 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:20:45.058 11:15:05 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:20:45.058 11:15:05 -- common/autotest_common.sh@1197 -- # return 0 00:20:45.058 11:15:05 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:45.058 11:15:05 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode10 -a 192.168.100.8 -s 4420 00:20:45.621 11:15:06 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:20:45.621 11:15:06 -- common/autotest_common.sh@1187 -- # local i=0 00:20:45.621 11:15:06 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:20:45.621 11:15:06 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:20:45.621 11:15:06 -- common/autotest_common.sh@1194 -- # sleep 2 00:20:47.514 11:15:08 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:20:47.514 11:15:08 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:20:47.514 11:15:08 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:20:47.514 11:15:08 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:20:47.514 11:15:08 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:20:47.514 11:15:08 -- common/autotest_common.sh@1197 -- # return 0 00:20:47.514 11:15:08 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:47.514 11:15:08 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode11 -a 192.168.100.8 -s 4420 00:20:48.883 11:15:09 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:20:48.883 11:15:09 -- common/autotest_common.sh@1187 -- # local i=0 00:20:48.883 11:15:09 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:20:48.883 11:15:09 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:20:48.883 11:15:09 -- common/autotest_common.sh@1194 -- # sleep 2 00:20:50.776 11:15:11 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:20:50.776 11:15:11 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:20:50.776 11:15:11 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:20:50.776 11:15:11 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:20:50.776 11:15:11 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:20:50.776 11:15:11 -- common/autotest_common.sh@1197 -- # return 0 00:20:50.776 11:15:11 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:20:50.776 [global] 00:20:50.776 
thread=1 00:20:50.776 invalidate=1 00:20:50.776 rw=read 00:20:50.776 time_based=1 00:20:50.776 runtime=10 00:20:50.776 ioengine=libaio 00:20:50.776 direct=1 00:20:50.776 bs=262144 00:20:50.776 iodepth=64 00:20:50.776 norandommap=1 00:20:50.776 numjobs=1 00:20:50.776 00:20:50.776 [job0] 00:20:50.776 filename=/dev/nvme0n1 00:20:50.776 [job1] 00:20:50.776 filename=/dev/nvme10n1 00:20:50.776 [job2] 00:20:50.776 filename=/dev/nvme1n1 00:20:50.776 [job3] 00:20:50.776 filename=/dev/nvme2n1 00:20:50.776 [job4] 00:20:50.776 filename=/dev/nvme3n1 00:20:50.776 [job5] 00:20:50.776 filename=/dev/nvme4n1 00:20:50.776 [job6] 00:20:50.776 filename=/dev/nvme5n1 00:20:50.776 [job7] 00:20:50.776 filename=/dev/nvme6n1 00:20:50.776 [job8] 00:20:50.776 filename=/dev/nvme7n1 00:20:50.776 [job9] 00:20:50.776 filename=/dev/nvme8n1 00:20:50.776 [job10] 00:20:50.776 filename=/dev/nvme9n1 00:20:50.776 Could not set queue depth (nvme0n1) 00:20:50.776 Could not set queue depth (nvme10n1) 00:20:50.776 Could not set queue depth (nvme1n1) 00:20:50.776 Could not set queue depth (nvme2n1) 00:20:50.776 Could not set queue depth (nvme3n1) 00:20:50.776 Could not set queue depth (nvme4n1) 00:20:50.776 Could not set queue depth (nvme5n1) 00:20:50.776 Could not set queue depth (nvme6n1) 00:20:50.776 Could not set queue depth (nvme7n1) 00:20:50.776 Could not set queue depth (nvme8n1) 00:20:50.776 Could not set queue depth (nvme9n1) 00:20:51.053 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:51.053 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:51.053 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:51.053 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:51.053 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:51.053 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:51.053 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:51.053 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:51.054 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:51.054 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:51.054 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:51.054 fio-3.35 00:20:51.054 Starting 11 threads 00:21:03.343 00:21:03.343 job0: (groupid=0, jobs=1): err= 0: pid=1676418: Fri Dec 13 11:15:21 2024 00:21:03.343 read: IOPS=1087, BW=272MiB/s (285MB/s)(2726MiB/10024msec) 00:21:03.343 slat (usec): min=8, max=52762, avg=837.62, stdev=2575.27 00:21:03.343 clat (usec): min=1032, max=131288, avg=57956.28, stdev=19005.80 00:21:03.343 lat (usec): min=1075, max=156016, avg=58793.90, stdev=19419.71 00:21:03.343 clat percentiles (msec): 00:21:03.343 | 1.00th=[ 11], 5.00th=[ 27], 10.00th=[ 41], 20.00th=[ 44], 00:21:03.343 | 30.00th=[ 46], 40.00th=[ 51], 50.00th=[ 58], 60.00th=[ 62], 00:21:03.343 | 70.00th=[ 68], 80.00th=[ 73], 90.00th=[ 82], 95.00th=[ 91], 00:21:03.343 | 99.00th=[ 107], 99.50th=[ 111], 99.90th=[ 118], 99.95th=[ 
126], 00:21:03.343 | 99.99th=[ 132] 00:21:03.343 bw ( KiB/s): min=172544, max=371200, per=6.64%, avg=277504.00, stdev=59675.20, samples=20 00:21:03.343 iops : min= 674, max= 1450, avg=1084.00, stdev=233.11, samples=20 00:21:03.343 lat (msec) : 2=0.43%, 4=0.26%, 10=0.27%, 20=1.92%, 50=36.66% 00:21:03.343 lat (msec) : 100=57.74%, 250=2.73% 00:21:03.343 cpu : usr=0.23%, sys=3.13%, ctx=3143, majf=0, minf=4097 00:21:03.343 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:21:03.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.343 issued rwts: total=10903,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.343 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.343 job1: (groupid=0, jobs=1): err= 0: pid=1676419: Fri Dec 13 11:15:21 2024 00:21:03.343 read: IOPS=1218, BW=305MiB/s (319MB/s)(3063MiB/10059msec) 00:21:03.343 slat (usec): min=7, max=47366, avg=694.14, stdev=2814.79 00:21:03.343 clat (msec): min=10, max=139, avg=51.80, stdev=25.16 00:21:03.343 lat (msec): min=10, max=139, avg=52.49, stdev=25.64 00:21:03.343 clat percentiles (msec): 00:21:03.343 | 1.00th=[ 14], 5.00th=[ 16], 10.00th=[ 17], 20.00th=[ 25], 00:21:03.343 | 30.00th=[ 32], 40.00th=[ 46], 50.00th=[ 58], 60.00th=[ 63], 00:21:03.343 | 70.00th=[ 68], 80.00th=[ 73], 90.00th=[ 81], 95.00th=[ 93], 00:21:03.343 | 99.00th=[ 108], 99.50th=[ 113], 99.90th=[ 118], 99.95th=[ 120], 00:21:03.343 | 99.99th=[ 132] 00:21:03.343 bw ( KiB/s): min=164864, max=722944, per=7.47%, avg=312038.40, stdev=138982.70, samples=20 00:21:03.343 iops : min= 644, max= 2824, avg=1218.90, stdev=542.90, samples=20 00:21:03.343 lat (msec) : 20=15.51%, 50=27.14%, 100=54.57%, 250=2.78% 00:21:03.343 cpu : usr=0.16%, sys=3.27%, ctx=3960, majf=0, minf=4097 00:21:03.343 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:21:03.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.343 issued rwts: total=12253,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.343 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.343 job2: (groupid=0, jobs=1): err= 0: pid=1676420: Fri Dec 13 11:15:21 2024 00:21:03.343 read: IOPS=1118, BW=280MiB/s (293MB/s)(2804MiB/10027msec) 00:21:03.343 slat (usec): min=7, max=72155, avg=765.70, stdev=2959.96 00:21:03.343 clat (usec): min=1202, max=130519, avg=56394.25, stdev=19581.73 00:21:03.343 lat (usec): min=1242, max=184723, avg=57159.96, stdev=20045.07 00:21:03.343 clat percentiles (msec): 00:21:03.343 | 1.00th=[ 7], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 37], 00:21:03.343 | 30.00th=[ 47], 40.00th=[ 54], 50.00th=[ 60], 60.00th=[ 62], 00:21:03.343 | 70.00th=[ 67], 80.00th=[ 73], 90.00th=[ 78], 95.00th=[ 90], 00:21:03.343 | 99.00th=[ 106], 99.50th=[ 110], 99.90th=[ 120], 99.95th=[ 121], 00:21:03.343 | 99.99th=[ 124] 00:21:03.343 bw ( KiB/s): min=192512, max=513024, per=6.83%, avg=285542.40, stdev=71128.56, samples=20 00:21:03.343 iops : min= 752, max= 2004, avg=1115.40, stdev=277.85, samples=20 00:21:03.343 lat (msec) : 2=0.07%, 4=0.21%, 10=2.01%, 20=0.69%, 50=33.50% 00:21:03.343 lat (msec) : 100=61.95%, 250=1.57% 00:21:03.343 cpu : usr=0.24%, sys=3.23%, ctx=3461, majf=0, minf=3659 00:21:03.343 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:21:03.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:21:03.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.343 issued rwts: total=11217,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.343 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.344 job3: (groupid=0, jobs=1): err= 0: pid=1676421: Fri Dec 13 11:15:21 2024 00:21:03.344 read: IOPS=1080, BW=270MiB/s (283MB/s)(2717MiB/10057msec) 00:21:03.344 slat (usec): min=7, max=76787, avg=792.39, stdev=2710.83 00:21:03.344 clat (msec): min=11, max=169, avg=58.39, stdev=18.95 00:21:03.344 lat (msec): min=11, max=169, avg=59.18, stdev=19.34 00:21:03.344 clat percentiles (msec): 00:21:03.344 | 1.00th=[ 18], 5.00th=[ 32], 10.00th=[ 37], 20.00th=[ 44], 00:21:03.344 | 30.00th=[ 45], 40.00th=[ 52], 50.00th=[ 59], 60.00th=[ 63], 00:21:03.344 | 70.00th=[ 69], 80.00th=[ 73], 90.00th=[ 79], 95.00th=[ 92], 00:21:03.344 | 99.00th=[ 111], 99.50th=[ 118], 99.90th=[ 148], 99.95th=[ 150], 00:21:03.344 | 99.99th=[ 169] 00:21:03.344 bw ( KiB/s): min=180224, max=378880, per=6.62%, avg=276526.45, stdev=50018.27, samples=20 00:21:03.344 iops : min= 704, max= 1480, avg=1080.15, stdev=195.37, samples=20 00:21:03.344 lat (msec) : 20=1.18%, 50=37.81%, 100=58.18%, 250=2.83% 00:21:03.344 cpu : usr=0.20%, sys=3.10%, ctx=3500, majf=0, minf=4097 00:21:03.344 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:21:03.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.344 issued rwts: total=10866,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.344 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.344 job4: (groupid=0, jobs=1): err= 0: pid=1676423: Fri Dec 13 11:15:21 2024 00:21:03.344 read: IOPS=1593, BW=398MiB/s (418MB/s)(3994MiB/10025msec) 00:21:03.344 slat (usec): min=7, max=22087, avg=601.17, stdev=1613.13 00:21:03.344 clat (usec): min=11317, max=89150, avg=39518.79, stdev=16109.71 00:21:03.344 lat (usec): min=11336, max=89853, avg=40119.96, stdev=16380.32 00:21:03.344 clat percentiles (usec): 00:21:03.344 | 1.00th=[12911], 5.00th=[14222], 10.00th=[17171], 20.00th=[29230], 00:21:03.344 | 30.00th=[30540], 40.00th=[31589], 50.00th=[33817], 60.00th=[40633], 00:21:03.344 | 70.00th=[46924], 80.00th=[56361], 90.00th=[62129], 95.00th=[68682], 00:21:03.344 | 99.00th=[78119], 99.50th=[79168], 99.90th=[84411], 99.95th=[87557], 00:21:03.344 | 99.99th=[88605] 00:21:03.344 bw ( KiB/s): min=218112, max=744448, per=9.75%, avg=407372.80, stdev=148513.85, samples=20 00:21:03.344 iops : min= 852, max= 2908, avg=1591.30, stdev=580.13, samples=20 00:21:03.344 lat (msec) : 20=11.21%, 50=63.82%, 100=24.97% 00:21:03.344 cpu : usr=0.28%, sys=3.02%, ctx=3688, majf=0, minf=4097 00:21:03.344 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:21:03.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.344 issued rwts: total=15976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.344 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.344 job5: (groupid=0, jobs=1): err= 0: pid=1676425: Fri Dec 13 11:15:21 2024 00:21:03.344 read: IOPS=1362, BW=341MiB/s (357MB/s)(3428MiB/10059msec) 00:21:03.344 slat (usec): min=7, max=66712, avg=691.62, stdev=2501.94 00:21:03.344 clat (msec): min=10, max=168, avg=46.22, stdev=18.68 00:21:03.344 lat (msec): min=10, max=170, avg=46.91, stdev=19.02 
00:21:03.344 clat percentiles (msec): 00:21:03.344 | 1.00th=[ 25], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 31], 00:21:03.344 | 30.00th=[ 32], 40.00th=[ 35], 50.00th=[ 41], 60.00th=[ 47], 00:21:03.344 | 70.00th=[ 55], 80.00th=[ 61], 90.00th=[ 72], 95.00th=[ 80], 00:21:03.344 | 99.00th=[ 106], 99.50th=[ 109], 99.90th=[ 146], 99.95th=[ 153], 00:21:03.344 | 99.99th=[ 169] 00:21:03.344 bw ( KiB/s): min=211456, max=502784, per=8.36%, avg=349363.20, stdev=100341.04, samples=20 00:21:03.344 iops : min= 826, max= 1964, avg=1364.70, stdev=391.96, samples=20 00:21:03.344 lat (msec) : 20=0.55%, 50=65.49%, 100=31.63%, 250=2.33% 00:21:03.344 cpu : usr=0.28%, sys=3.09%, ctx=3271, majf=0, minf=4097 00:21:03.344 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:21:03.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.344 issued rwts: total=13710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.344 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.344 job6: (groupid=0, jobs=1): err= 0: pid=1676426: Fri Dec 13 11:15:21 2024 00:21:03.344 read: IOPS=2267, BW=567MiB/s (594MB/s)(5676MiB/10012msec) 00:21:03.344 slat (usec): min=7, max=16997, avg=396.11, stdev=1078.97 00:21:03.344 clat (usec): min=696, max=76905, avg=27803.77, stdev=12538.79 00:21:03.344 lat (usec): min=727, max=76924, avg=28199.89, stdev=12727.66 00:21:03.344 clat percentiles (usec): 00:21:03.344 | 1.00th=[ 7046], 5.00th=[14222], 10.00th=[14877], 20.00th=[15533], 00:21:03.344 | 30.00th=[16188], 40.00th=[20055], 50.00th=[28181], 60.00th=[30278], 00:21:03.344 | 70.00th=[32375], 80.00th=[42730], 90.00th=[45876], 95.00th=[48497], 00:21:03.344 | 99.00th=[57410], 99.50th=[60556], 99.90th=[64750], 99.95th=[65799], 00:21:03.344 | 99.99th=[72877] 00:21:03.344 bw ( KiB/s): min=299520, max=1060352, per=13.87%, avg=579635.40, stdev=225115.19, samples=20 00:21:03.344 iops : min= 1170, max= 4142, avg=2264.20, stdev=879.36, samples=20 00:21:03.344 lat (usec) : 750=0.01% 00:21:03.344 lat (msec) : 2=0.16%, 4=0.35%, 10=0.89%, 20=38.42%, 50=56.57% 00:21:03.344 lat (msec) : 100=3.60% 00:21:03.344 cpu : usr=0.44%, sys=4.40%, ctx=6053, majf=0, minf=4097 00:21:03.344 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:21:03.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.344 issued rwts: total=22703,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.344 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.344 job7: (groupid=0, jobs=1): err= 0: pid=1676427: Fri Dec 13 11:15:21 2024 00:21:03.344 read: IOPS=2436, BW=609MiB/s (639MB/s)(6106MiB/10025msec) 00:21:03.344 slat (usec): min=6, max=22624, avg=378.37, stdev=1056.83 00:21:03.344 clat (usec): min=8147, max=71420, avg=25871.70, stdev=11890.32 00:21:03.344 lat (usec): min=8386, max=82108, avg=26250.07, stdev=12067.62 00:21:03.344 clat percentiles (usec): 00:21:03.344 | 1.00th=[12387], 5.00th=[13042], 10.00th=[13698], 20.00th=[14222], 00:21:03.344 | 30.00th=[14746], 40.00th=[17957], 50.00th=[25822], 60.00th=[29230], 00:21:03.344 | 70.00th=[30802], 80.00th=[34341], 90.00th=[44827], 95.00th=[46924], 00:21:03.344 | 99.00th=[56361], 99.50th=[61080], 99.90th=[65274], 99.95th=[66847], 00:21:03.344 | 99.99th=[68682] 00:21:03.344 bw ( KiB/s): min=311808, max=1092608, per=14.92%, avg=623537.80, stdev=244117.22, 
samples=20 00:21:03.344 iops : min= 1218, max= 4268, avg=2435.65, stdev=953.60, samples=20 00:21:03.344 lat (msec) : 10=0.03%, 20=42.95%, 50=54.20%, 100=2.82% 00:21:03.344 cpu : usr=0.38%, sys=4.22%, ctx=6027, majf=0, minf=4097 00:21:03.344 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:21:03.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.344 issued rwts: total=24422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.344 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.344 job8: (groupid=0, jobs=1): err= 0: pid=1676428: Fri Dec 13 11:15:21 2024 00:21:03.344 read: IOPS=1565, BW=391MiB/s (410MB/s)(3936MiB/10058msec) 00:21:03.344 slat (usec): min=7, max=67333, avg=573.49, stdev=1830.32 00:21:03.344 clat (usec): min=1328, max=170679, avg=40275.84, stdev=18422.68 00:21:03.344 lat (usec): min=1358, max=170721, avg=40849.33, stdev=18737.01 00:21:03.344 clat percentiles (msec): 00:21:03.344 | 1.00th=[ 13], 5.00th=[ 15], 10.00th=[ 17], 20.00th=[ 28], 00:21:03.344 | 30.00th=[ 31], 40.00th=[ 32], 50.00th=[ 41], 60.00th=[ 45], 00:21:03.344 | 70.00th=[ 46], 80.00th=[ 50], 90.00th=[ 67], 95.00th=[ 73], 00:21:03.344 | 99.00th=[ 103], 99.50th=[ 107], 99.90th=[ 136], 99.95th=[ 146], 00:21:03.344 | 99.99th=[ 171] 00:21:03.344 bw ( KiB/s): min=225792, max=864768, per=9.60%, avg=401446.35, stdev=148768.90, samples=20 00:21:03.344 iops : min= 882, max= 3378, avg=1568.10, stdev=581.13, samples=20 00:21:03.344 lat (msec) : 2=0.06%, 4=0.17%, 10=0.09%, 20=12.72%, 50=67.83% 00:21:03.344 lat (msec) : 100=17.95%, 250=1.18% 00:21:03.344 cpu : usr=0.33%, sys=3.51%, ctx=4442, majf=0, minf=4097 00:21:03.344 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:21:03.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.344 issued rwts: total=15743,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.344 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.344 job9: (groupid=0, jobs=1): err= 0: pid=1676429: Fri Dec 13 11:15:21 2024 00:21:03.344 read: IOPS=960, BW=240MiB/s (252MB/s)(2416MiB/10058msec) 00:21:03.344 slat (usec): min=8, max=43802, avg=918.97, stdev=2790.91 00:21:03.344 clat (msec): min=11, max=148, avg=65.64, stdev=17.40 00:21:03.344 lat (msec): min=11, max=164, avg=66.56, stdev=17.83 00:21:03.344 clat percentiles (msec): 00:21:03.344 | 1.00th=[ 19], 5.00th=[ 35], 10.00th=[ 44], 20.00th=[ 55], 00:21:03.344 | 30.00th=[ 59], 40.00th=[ 62], 50.00th=[ 67], 60.00th=[ 71], 00:21:03.344 | 70.00th=[ 73], 80.00th=[ 75], 90.00th=[ 89], 95.00th=[ 94], 00:21:03.344 | 99.00th=[ 111], 99.50th=[ 116], 99.90th=[ 140], 99.95th=[ 140], 00:21:03.344 | 99.99th=[ 148] 00:21:03.344 bw ( KiB/s): min=161792, max=333312, per=5.88%, avg=245760.00, stdev=46365.65, samples=20 00:21:03.344 iops : min= 632, max= 1302, avg=960.00, stdev=181.12, samples=20 00:21:03.344 lat (msec) : 20=1.30%, 50=14.90%, 100=80.40%, 250=3.39% 00:21:03.344 cpu : usr=0.26%, sys=3.09%, ctx=2762, majf=0, minf=4097 00:21:03.344 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:21:03.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.344 issued rwts: total=9663,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.344 
latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.344 job10: (groupid=0, jobs=1): err= 0: pid=1676430: Fri Dec 13 11:15:21 2024 00:21:03.344 read: IOPS=1667, BW=417MiB/s (437MB/s)(4194MiB/10057msec) 00:21:03.344 slat (usec): min=7, max=50456, avg=558.01, stdev=1864.07 00:21:03.344 clat (msec): min=2, max=145, avg=37.78, stdev=25.68 00:21:03.344 lat (msec): min=2, max=158, avg=38.33, stdev=26.09 00:21:03.344 clat percentiles (msec): 00:21:03.344 | 1.00th=[ 9], 5.00th=[ 15], 10.00th=[ 16], 20.00th=[ 16], 00:21:03.344 | 30.00th=[ 17], 40.00th=[ 20], 50.00th=[ 30], 60.00th=[ 34], 00:21:03.344 | 70.00th=[ 54], 80.00th=[ 62], 90.00th=[ 77], 95.00th=[ 90], 00:21:03.344 | 99.00th=[ 107], 99.50th=[ 110], 99.90th=[ 120], 99.95th=[ 142], 00:21:03.344 | 99.99th=[ 146] 00:21:03.344 bw ( KiB/s): min=167424, max=950272, per=10.24%, avg=427801.35, stdev=268440.11, samples=20 00:21:03.344 iops : min= 654, max= 3712, avg=1671.05, stdev=1048.63, samples=20 00:21:03.344 lat (msec) : 4=0.07%, 10=1.14%, 20=39.54%, 50=28.07%, 100=29.17% 00:21:03.344 lat (msec) : 250=2.01% 00:21:03.345 cpu : usr=0.31%, sys=3.03%, ctx=4235, majf=0, minf=4097 00:21:03.345 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:21:03.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.345 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.345 issued rwts: total=16775,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.345 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.345 00:21:03.345 Run status group 0 (all jobs): 00:21:03.345 READ: bw=4082MiB/s (4280MB/s), 240MiB/s-609MiB/s (252MB/s-639MB/s), io=40.1GiB (43.1GB), run=10012-10059msec 00:21:03.345 00:21:03.345 Disk stats (read/write): 00:21:03.345 nvme0n1: ios=21735/0, merge=0/0, ticks=1237923/0, in_queue=1237923, util=97.97% 00:21:03.345 nvme10n1: ios=24379/0, merge=0/0, ticks=1236905/0, in_queue=1236905, util=98.10% 00:21:03.345 nvme1n1: ios=22376/0, merge=0/0, ticks=1240605/0, in_queue=1240605, util=98.29% 00:21:03.345 nvme2n1: ios=21631/0, merge=0/0, ticks=1235829/0, in_queue=1235829, util=98.37% 00:21:03.345 nvme3n1: ios=31892/0, merge=0/0, ticks=1233112/0, in_queue=1233112, util=98.43% 00:21:03.345 nvme4n1: ios=27327/0, merge=0/0, ticks=1232479/0, in_queue=1232479, util=98.65% 00:21:03.345 nvme5n1: ios=43389/0, merge=0/0, ticks=1204960/0, in_queue=1204960, util=98.69% 00:21:03.345 nvme6n1: ios=48753/0, merge=0/0, ticks=1233151/0, in_queue=1233151, util=98.80% 00:21:03.345 nvme7n1: ios=31376/0, merge=0/0, ticks=1231449/0, in_queue=1231449, util=99.05% 00:21:03.345 nvme8n1: ios=19221/0, merge=0/0, ticks=1236960/0, in_queue=1236960, util=99.18% 00:21:03.345 nvme9n1: ios=33442/0, merge=0/0, ticks=1232512/0, in_queue=1232512, util=99.25% 00:21:03.345 11:15:21 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:21:03.345 [global] 00:21:03.345 thread=1 00:21:03.345 invalidate=1 00:21:03.345 rw=randwrite 00:21:03.345 time_based=1 00:21:03.345 runtime=10 00:21:03.345 ioengine=libaio 00:21:03.345 direct=1 00:21:03.345 bs=262144 00:21:03.345 iodepth=64 00:21:03.345 norandommap=1 00:21:03.345 numjobs=1 00:21:03.345 00:21:03.345 [job0] 00:21:03.345 filename=/dev/nvme0n1 00:21:03.345 [job1] 00:21:03.345 filename=/dev/nvme10n1 00:21:03.345 [job2] 00:21:03.345 filename=/dev/nvme1n1 00:21:03.345 [job3] 00:21:03.345 filename=/dev/nvme2n1 00:21:03.345 [job4] 00:21:03.345 
filename=/dev/nvme3n1 00:21:03.345 [job5] 00:21:03.345 filename=/dev/nvme4n1 00:21:03.345 [job6] 00:21:03.345 filename=/dev/nvme5n1 00:21:03.345 [job7] 00:21:03.345 filename=/dev/nvme6n1 00:21:03.345 [job8] 00:21:03.345 filename=/dev/nvme7n1 00:21:03.345 [job9] 00:21:03.345 filename=/dev/nvme8n1 00:21:03.345 [job10] 00:21:03.345 filename=/dev/nvme9n1 00:21:03.345 Could not set queue depth (nvme0n1) 00:21:03.345 Could not set queue depth (nvme10n1) 00:21:03.345 Could not set queue depth (nvme1n1) 00:21:03.345 Could not set queue depth (nvme2n1) 00:21:03.345 Could not set queue depth (nvme3n1) 00:21:03.345 Could not set queue depth (nvme4n1) 00:21:03.345 Could not set queue depth (nvme5n1) 00:21:03.345 Could not set queue depth (nvme6n1) 00:21:03.345 Could not set queue depth (nvme7n1) 00:21:03.345 Could not set queue depth (nvme8n1) 00:21:03.345 Could not set queue depth (nvme9n1) 00:21:03.345 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:03.345 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:03.345 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:03.345 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:03.345 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:03.345 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:03.345 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:03.345 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:03.345 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:03.345 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:03.345 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:03.345 fio-3.35 00:21:03.345 Starting 11 threads 00:21:13.308 00:21:13.308 job0: (groupid=0, jobs=1): err= 0: pid=1678179: Fri Dec 13 11:15:32 2024 00:21:13.308 write: IOPS=1166, BW=292MiB/s (306MB/s)(2925MiB/10030msec); 0 zone resets 00:21:13.308 slat (usec): min=12, max=32182, avg=807.54, stdev=1996.83 00:21:13.308 clat (usec): min=702, max=142622, avg=54051.61, stdev=26537.99 00:21:13.309 lat (usec): min=763, max=152064, avg=54859.15, stdev=26970.36 00:21:13.309 clat percentiles (msec): 00:21:13.309 | 1.00th=[ 4], 5.00th=[ 16], 10.00th=[ 17], 20.00th=[ 27], 00:21:13.309 | 30.00th=[ 43], 40.00th=[ 50], 50.00th=[ 54], 60.00th=[ 61], 00:21:13.309 | 70.00th=[ 67], 80.00th=[ 78], 90.00th=[ 90], 95.00th=[ 95], 00:21:13.309 | 99.00th=[ 120], 99.50th=[ 127], 99.90th=[ 136], 99.95th=[ 136], 00:21:13.309 | 99.99th=[ 144] 00:21:13.309 bw ( KiB/s): min=141082, max=818688, per=8.42%, avg=297852.00, stdev=154498.43, samples=20 00:21:13.309 iops : min= 551, max= 3198, avg=1163.45, stdev=603.54, samples=20 00:21:13.309 lat (usec) : 750=0.04%, 1000=0.06% 00:21:13.309 lat (msec) : 2=0.50%, 4=0.80%, 10=1.47%, 20=11.94%, 50=27.52% 00:21:13.309 lat (msec) : 100=53.95%, 250=3.72% 00:21:13.309 cpu : usr=2.25%, sys=3.10%, ctx=3131, majf=0, minf=1 
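For readers reproducing this write pass outside the harness: the trace above invokes scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10, and the generated job file is echoed in full (libaio, direct I/O, 256 KiB blocks, queue depth 64, 10 s time-based random writes, one job per connected namespace). A minimal hand-rolled equivalent is sketched below; the temporary job-file path and the single [job0] stanza are illustrative assumptions, while the option values are copied from the trace.

# Sketch only (not the harness script): rebuild the randwrite pass by hand
# against namespaces already connected over NVMe-oF.
cat > /tmp/multiconnection-randwrite.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1

[job0]
# repeat one [jobN] stanza per connected /dev/nvmeXn1 namespace
filename=/dev/nvme0n1
EOF
fio /tmp/multiconnection-randwrite.fio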
00:21:13.309 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:21:13.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:13.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:13.309 issued rwts: total=0,11698,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:13.309 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:13.309 job1: (groupid=0, jobs=1): err= 0: pid=1678199: Fri Dec 13 11:15:32 2024 00:21:13.309 write: IOPS=1810, BW=453MiB/s (474MB/s)(4542MiB/10037msec); 0 zone resets 00:21:13.309 slat (usec): min=24, max=9998, avg=541.92, stdev=1148.56 00:21:13.309 clat (usec): min=2989, max=95339, avg=34808.60, stdev=16840.39 00:21:13.309 lat (usec): min=3053, max=95422, avg=35350.52, stdev=17090.51 00:21:13.309 clat percentiles (usec): 00:21:13.309 | 1.00th=[12518], 5.00th=[16581], 10.00th=[17171], 20.00th=[17957], 00:21:13.309 | 30.00th=[18482], 40.00th=[22938], 50.00th=[35390], 60.00th=[37487], 00:21:13.309 | 70.00th=[46924], 80.00th=[50594], 90.00th=[55313], 95.00th=[64226], 00:21:13.309 | 99.00th=[78119], 99.50th=[84411], 99.90th=[90702], 99.95th=[91751], 00:21:13.309 | 99.99th=[93848] 00:21:13.309 bw ( KiB/s): min=254464, max=903680, per=13.09%, avg=463373.05, stdev=229922.77, samples=20 00:21:13.309 iops : min= 994, max= 3530, avg=1810.05, stdev=898.13, samples=20 00:21:13.309 lat (msec) : 4=0.01%, 10=0.47%, 20=38.24%, 50=39.24%, 100=22.04% 00:21:13.309 cpu : usr=4.43%, sys=4.56%, ctx=3814, majf=0, minf=1 00:21:13.309 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:21:13.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:13.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:13.309 issued rwts: total=0,18167,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:13.309 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:13.309 job2: (groupid=0, jobs=1): err= 0: pid=1678210: Fri Dec 13 11:15:32 2024 00:21:13.309 write: IOPS=1435, BW=359MiB/s (376MB/s)(3602MiB/10036msec); 0 zone resets 00:21:13.309 slat (usec): min=12, max=98952, avg=656.56, stdev=1698.16 00:21:13.309 clat (usec): min=705, max=215911, avg=43907.16, stdev=23436.06 00:21:13.309 lat (usec): min=981, max=215963, avg=44563.71, stdev=23777.59 00:21:13.309 clat percentiles (msec): 00:21:13.309 | 1.00th=[ 5], 5.00th=[ 17], 10.00th=[ 17], 20.00th=[ 19], 00:21:13.309 | 30.00th=[ 28], 40.00th=[ 34], 50.00th=[ 47], 60.00th=[ 51], 00:21:13.309 | 70.00th=[ 54], 80.00th=[ 60], 90.00th=[ 77], 95.00th=[ 84], 00:21:13.309 | 99.00th=[ 120], 99.50th=[ 132], 99.90th=[ 153], 99.95th=[ 159], 00:21:13.309 | 99.99th=[ 215] 00:21:13.309 bw ( KiB/s): min=200704, max=855552, per=10.38%, avg=367240.55, stdev=183031.45, samples=20 00:21:13.309 iops : min= 784, max= 3342, avg=1434.50, stdev=714.97, samples=20 00:21:13.309 lat (usec) : 750=0.01%, 1000=0.01% 00:21:13.309 lat (msec) : 2=0.28%, 4=0.46%, 10=0.68%, 20=21.32%, 50=36.56% 00:21:13.309 lat (msec) : 100=39.39%, 250=1.29% 00:21:13.309 cpu : usr=3.06%, sys=3.65%, ctx=3640, majf=0, minf=1 00:21:13.309 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:21:13.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:13.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:13.309 issued rwts: total=0,14409,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:13.309 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:13.309 job3: (groupid=0, 
jobs=1): err= 0: pid=1678216: Fri Dec 13 11:15:32 2024 00:21:13.309 write: IOPS=1164, BW=291MiB/s (305MB/s)(2921MiB/10036msec); 0 zone resets 00:21:13.309 slat (usec): min=15, max=38621, avg=703.19, stdev=1800.32 00:21:13.309 clat (usec): min=961, max=134025, avg=54263.67, stdev=25934.24 00:21:13.309 lat (usec): min=1021, max=140185, avg=54966.86, stdev=26285.51 00:21:13.309 clat percentiles (msec): 00:21:13.309 | 1.00th=[ 10], 5.00th=[ 17], 10.00th=[ 20], 20.00th=[ 32], 00:21:13.309 | 30.00th=[ 40], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 54], 00:21:13.309 | 70.00th=[ 67], 80.00th=[ 84], 90.00th=[ 90], 95.00th=[ 97], 00:21:13.309 | 99.00th=[ 120], 99.50th=[ 125], 99.90th=[ 132], 99.95th=[ 133], 00:21:13.309 | 99.99th=[ 133] 00:21:13.309 bw ( KiB/s): min=172032, max=702464, per=8.40%, avg=297448.85, stdev=135713.46, samples=20 00:21:13.309 iops : min= 672, max= 2744, avg=1161.85, stdev=530.18, samples=20 00:21:13.309 lat (usec) : 1000=0.02% 00:21:13.309 lat (msec) : 2=0.06%, 4=0.25%, 10=0.70%, 20=9.54%, 50=38.37% 00:21:13.309 lat (msec) : 100=47.00%, 250=4.07% 00:21:13.309 cpu : usr=2.48%, sys=3.09%, ctx=3589, majf=0, minf=1 00:21:13.309 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:21:13.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:13.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:13.309 issued rwts: total=0,11682,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:13.309 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:13.309 job4: (groupid=0, jobs=1): err= 0: pid=1678219: Fri Dec 13 11:15:32 2024 00:21:13.309 write: IOPS=1028, BW=257MiB/s (270MB/s)(2588MiB/10060msec); 0 zone resets 00:21:13.309 slat (usec): min=19, max=24092, avg=835.86, stdev=1881.17 00:21:13.309 clat (msec): min=7, max=141, avg=61.34, stdev=22.22 00:21:13.309 lat (msec): min=7, max=147, avg=62.18, stdev=22.61 00:21:13.309 clat percentiles (msec): 00:21:13.309 | 1.00th=[ 21], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 41], 00:21:13.309 | 30.00th=[ 47], 40.00th=[ 52], 50.00th=[ 58], 60.00th=[ 67], 00:21:13.309 | 70.00th=[ 73], 80.00th=[ 83], 90.00th=[ 91], 95.00th=[ 99], 00:21:13.309 | 99.00th=[ 123], 99.50th=[ 128], 99.90th=[ 136], 99.95th=[ 136], 00:21:13.309 | 99.99th=[ 142] 00:21:13.309 bw ( KiB/s): min=133386, max=419328, per=7.44%, avg=263367.95, stdev=81345.97, samples=20 00:21:13.309 iops : min= 521, max= 1638, avg=1028.75, stdev=317.79, samples=20 00:21:13.309 lat (msec) : 10=0.13%, 20=0.71%, 50=36.04%, 100=58.66%, 250=4.47% 00:21:13.309 cpu : usr=2.22%, sys=3.06%, ctx=3088, majf=0, minf=1 00:21:13.309 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:21:13.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:13.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:13.309 issued rwts: total=0,10351,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:13.309 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:13.309 job5: (groupid=0, jobs=1): err= 0: pid=1678225: Fri Dec 13 11:15:32 2024 00:21:13.309 write: IOPS=1503, BW=376MiB/s (394MB/s)(3766MiB/10015msec); 0 zone resets 00:21:13.309 slat (usec): min=12, max=44669, avg=568.95, stdev=1534.77 00:21:13.309 clat (usec): min=340, max=121309, avg=41975.43, stdev=19849.32 00:21:13.309 lat (usec): min=405, max=126489, avg=42544.38, stdev=20154.50 00:21:13.309 clat percentiles (usec): 00:21:13.309 | 1.00th=[ 865], 5.00th=[ 3490], 10.00th=[ 9765], 20.00th=[ 30802], 00:21:13.309 | 
30.00th=[ 35390], 40.00th=[ 36963], 50.00th=[ 43254], 60.00th=[ 49021], 00:21:13.309 | 70.00th=[ 52167], 80.00th=[ 55313], 90.00th=[ 64226], 95.00th=[ 74974], 00:21:13.309 | 99.00th=[ 89654], 99.50th=[ 94897], 99.90th=[111674], 99.95th=[117965], 00:21:13.309 | 99.99th=[120062] 00:21:13.309 bw ( KiB/s): min=214016, max=723968, per=10.85%, avg=383991.50, stdev=109383.98, samples=20 00:21:13.309 iops : min= 836, max= 2828, avg=1499.90, stdev=427.28, samples=20 00:21:13.309 lat (usec) : 500=0.04%, 750=0.71%, 1000=0.62% 00:21:13.309 lat (msec) : 2=1.69%, 4=2.39%, 10=4.65%, 20=5.30%, 50=47.54% 00:21:13.309 lat (msec) : 100=36.66%, 250=0.39% 00:21:13.309 cpu : usr=2.97%, sys=3.88%, ctx=4639, majf=0, minf=1 00:21:13.309 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:21:13.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:13.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:13.309 issued rwts: total=0,15062,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:13.309 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:13.309 job6: (groupid=0, jobs=1): err= 0: pid=1678232: Fri Dec 13 11:15:32 2024 00:21:13.309 write: IOPS=1154, BW=289MiB/s (303MB/s)(2902MiB/10058msec); 0 zone resets 00:21:13.309 slat (usec): min=17, max=64343, avg=797.42, stdev=1743.92 00:21:13.309 clat (msec): min=3, max=186, avg=54.64, stdev=17.91 00:21:13.309 lat (msec): min=3, max=186, avg=55.44, stdev=18.16 00:21:13.309 clat percentiles (msec): 00:21:13.309 | 1.00th=[ 18], 5.00th=[ 27], 10.00th=[ 35], 20.00th=[ 41], 00:21:13.309 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 54], 60.00th=[ 56], 00:21:13.309 | 70.00th=[ 59], 80.00th=[ 68], 90.00th=[ 79], 95.00th=[ 84], 00:21:13.309 | 99.00th=[ 118], 99.50th=[ 128], 99.90th=[ 140], 99.95th=[ 146], 00:21:13.309 | 99.99th=[ 186] 00:21:13.309 bw ( KiB/s): min=200704, max=513024, per=8.35%, avg=295556.40, stdev=75719.42, samples=20 00:21:13.309 iops : min= 784, max= 2004, avg=1154.50, stdev=295.78, samples=20 00:21:13.309 lat (msec) : 4=0.02%, 10=0.20%, 20=2.59%, 50=31.78%, 100=63.98% 00:21:13.309 lat (msec) : 250=1.43% 00:21:13.309 cpu : usr=2.55%, sys=3.13%, ctx=3110, majf=0, minf=1 00:21:13.309 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:21:13.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:13.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:13.309 issued rwts: total=0,11607,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:13.309 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:13.309 job7: (groupid=0, jobs=1): err= 0: pid=1678238: Fri Dec 13 11:15:32 2024 00:21:13.309 write: IOPS=1266, BW=317MiB/s (332MB/s)(3185MiB/10058msec); 0 zone resets 00:21:13.309 slat (usec): min=9, max=57118, avg=585.51, stdev=1722.39 00:21:13.309 clat (usec): min=431, max=158057, avg=49926.18, stdev=24215.48 00:21:13.309 lat (usec): min=464, max=158102, avg=50511.69, stdev=24580.29 00:21:13.309 clat percentiles (usec): 00:21:13.309 | 1.00th=[ 1221], 5.00th=[ 9241], 10.00th=[ 22414], 20.00th=[ 31327], 00:21:13.309 | 30.00th=[ 34341], 40.00th=[ 43254], 50.00th=[ 49021], 60.00th=[ 54789], 00:21:13.310 | 70.00th=[ 62129], 80.00th=[ 67634], 90.00th=[ 79168], 95.00th=[ 91751], 00:21:13.310 | 99.00th=[122160], 99.50th=[128451], 99.90th=[137364], 99.95th=[137364], 00:21:13.310 | 99.99th=[158335] 00:21:13.310 bw ( KiB/s): min=137490, max=631296, per=9.17%, avg=324471.05, stdev=120983.55, samples=20 00:21:13.310 iops : 
min= 537, max= 2466, avg=1267.45, stdev=472.58, samples=20 00:21:13.310 lat (usec) : 500=0.02%, 750=0.30%, 1000=0.38% 00:21:13.310 lat (msec) : 2=1.53%, 4=1.19%, 10=1.78%, 20=3.29%, 50=43.55% 00:21:13.310 lat (msec) : 100=44.06%, 250=3.91% 00:21:13.310 cpu : usr=2.12%, sys=3.80%, ctx=4510, majf=0, minf=1 00:21:13.310 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:21:13.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:13.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:13.310 issued rwts: total=0,12739,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:13.310 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:13.310 job8: (groupid=0, jobs=1): err= 0: pid=1678255: Fri Dec 13 11:15:32 2024 00:21:13.310 write: IOPS=1006, BW=252MiB/s (264MB/s)(2531MiB/10059msec); 0 zone resets 00:21:13.310 slat (usec): min=16, max=51210, avg=924.94, stdev=2057.69 00:21:13.310 clat (usec): min=468, max=146865, avg=62654.16, stdev=23844.50 00:21:13.310 lat (usec): min=504, max=146914, avg=63579.10, stdev=24237.02 00:21:13.310 clat percentiles (msec): 00:21:13.310 | 1.00th=[ 15], 5.00th=[ 23], 10.00th=[ 30], 20.00th=[ 42], 00:21:13.310 | 30.00th=[ 50], 40.00th=[ 58], 50.00th=[ 66], 60.00th=[ 70], 00:21:13.310 | 70.00th=[ 75], 80.00th=[ 85], 90.00th=[ 91], 95.00th=[ 100], 00:21:13.310 | 99.00th=[ 123], 99.50th=[ 129], 99.90th=[ 136], 99.95th=[ 138], 00:21:13.310 | 99.99th=[ 148] 00:21:13.310 bw ( KiB/s): min=134925, max=511488, per=7.28%, avg=257505.65, stdev=90377.75, samples=20 00:21:13.310 iops : min= 527, max= 1998, avg=1005.85, stdev=353.07, samples=20 00:21:13.310 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.07% 00:21:13.310 lat (msec) : 2=0.43%, 4=0.27%, 20=3.37%, 50=26.36%, 100=64.84% 00:21:13.310 lat (msec) : 250=4.63% 00:21:13.310 cpu : usr=1.95%, sys=3.02%, ctx=2827, majf=0, minf=1 00:21:13.310 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:21:13.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:13.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:13.310 issued rwts: total=0,10122,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:13.310 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:13.310 job9: (groupid=0, jobs=1): err= 0: pid=1678269: Fri Dec 13 11:15:32 2024 00:21:13.310 write: IOPS=1003, BW=251MiB/s (263MB/s)(2524MiB/10061msec); 0 zone resets 00:21:13.310 slat (usec): min=16, max=57213, avg=851.70, stdev=2233.52 00:21:13.310 clat (usec): min=204, max=159678, avg=62895.57, stdev=26630.01 00:21:13.310 lat (usec): min=229, max=159732, avg=63747.27, stdev=27076.57 00:21:13.310 clat percentiles (usec): 00:21:13.310 | 1.00th=[ 1942], 5.00th=[ 10945], 10.00th=[ 19530], 20.00th=[ 42206], 00:21:13.310 | 30.00th=[ 54789], 40.00th=[ 60556], 50.00th=[ 65799], 60.00th=[ 71828], 00:21:13.310 | 70.00th=[ 78119], 80.00th=[ 85459], 90.00th=[ 90702], 95.00th=[100140], 00:21:13.310 | 99.00th=[123208], 99.50th=[128451], 99.90th=[135267], 99.95th=[137364], 00:21:13.310 | 99.99th=[154141] 00:21:13.310 bw ( KiB/s): min=127488, max=404992, per=7.26%, avg=256888.45, stdev=82855.76, samples=20 00:21:13.310 iops : min= 498, max= 1582, avg=1003.45, stdev=323.68, samples=20 00:21:13.310 lat (usec) : 250=0.03%, 500=0.06%, 750=0.06%, 1000=0.08% 00:21:13.310 lat (msec) : 2=0.82%, 4=0.84%, 10=2.58%, 20=5.92%, 50=13.84% 00:21:13.310 lat (msec) : 100=70.81%, 250=4.96% 00:21:13.310 cpu : usr=2.22%, sys=2.97%, ctx=3204, majf=0, minf=1 
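As a quick sanity check on these per-job write numbers, average bandwidth should equal average IOPS multiplied by the 256 KiB block size. For job0 above: 1163.45 IOPS × 256 KiB ≈ 297,843 KiB/s, which agrees with the reported avg=297852 KiB/s once fio's rounding of the per-sample averages is allowed for; likewise for job1: 1810.05 × 256 ≈ 463,373 KiB/s versus the reported avg=463373.05.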
00:21:13.310 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:21:13.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:13.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:13.310 issued rwts: total=0,10097,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:13.310 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:13.310 job10: (groupid=0, jobs=1): err= 0: pid=1678275: Fri Dec 13 11:15:32 2024 00:21:13.310 write: IOPS=1312, BW=328MiB/s (344MB/s)(3291MiB/10031msec); 0 zone resets 00:21:13.310 slat (usec): min=13, max=30832, avg=670.11, stdev=1654.69 00:21:13.310 clat (usec): min=427, max=112430, avg=48082.54, stdev=19571.99 00:21:13.310 lat (usec): min=480, max=115813, avg=48752.65, stdev=19905.43 00:21:13.310 clat percentiles (msec): 00:21:13.310 | 1.00th=[ 4], 5.00th=[ 18], 10.00th=[ 22], 20.00th=[ 32], 00:21:13.310 | 30.00th=[ 35], 40.00th=[ 45], 50.00th=[ 52], 60.00th=[ 54], 00:21:13.310 | 70.00th=[ 57], 80.00th=[ 64], 90.00th=[ 75], 95.00th=[ 82], 00:21:13.310 | 99.00th=[ 90], 99.50th=[ 93], 99.90th=[ 102], 99.95th=[ 111], 00:21:13.310 | 99.99th=[ 113] 00:21:13.310 bw ( KiB/s): min=201216, max=654848, per=9.48%, avg=335399.95, stdev=122801.49, samples=20 00:21:13.310 iops : min= 786, max= 2558, avg=1310.10, stdev=479.68, samples=20 00:21:13.310 lat (usec) : 500=0.03%, 750=0.04%, 1000=0.05% 00:21:13.310 lat (msec) : 2=0.38%, 4=0.56%, 10=0.81%, 20=7.10%, 50=39.11% 00:21:13.310 lat (msec) : 100=51.74%, 250=0.18% 00:21:13.310 cpu : usr=2.57%, sys=3.52%, ctx=3713, majf=0, minf=1 00:21:13.310 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:21:13.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:13.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:13.310 issued rwts: total=0,13164,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:13.310 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:13.310 00:21:13.310 Run status group 0 (all jobs): 00:21:13.310 WRITE: bw=3456MiB/s (3624MB/s), 251MiB/s-453MiB/s (263MB/s-474MB/s), io=34.0GiB (36.5GB), run=10015-10061msec 00:21:13.310 00:21:13.310 Disk stats (read/write): 00:21:13.310 nvme0n1: ios=49/23221, merge=0/0, ticks=10/1231459, in_queue=1231469, util=97.75% 00:21:13.310 nvme10n1: ios=0/36177, merge=0/0, ticks=0/1231382, in_queue=1231382, util=97.84% 00:21:13.310 nvme1n1: ios=0/28654, merge=0/0, ticks=0/1231828, in_queue=1231828, util=98.05% 00:21:13.310 nvme2n1: ios=0/23195, merge=0/0, ticks=0/1235785, in_queue=1235785, util=98.16% 00:21:13.310 nvme3n1: ios=0/20550, merge=0/0, ticks=0/1232761, in_queue=1232761, util=98.21% 00:21:13.310 nvme4n1: ios=0/29861, merge=0/0, ticks=0/1238125, in_queue=1238125, util=98.43% 00:21:13.310 nvme5n1: ios=0/23060, merge=0/0, ticks=0/1232111, in_queue=1232111, util=98.53% 00:21:13.310 nvme6n1: ios=0/25328, merge=0/0, ticks=0/1238239, in_queue=1238239, util=98.61% 00:21:13.310 nvme7n1: ios=0/20090, merge=0/0, ticks=0/1230548, in_queue=1230548, util=98.86% 00:21:13.310 nvme8n1: ios=0/20035, merge=0/0, ticks=0/1234879, in_queue=1234879, util=98.98% 00:21:13.310 nvme9n1: ios=0/26151, merge=0/0, ticks=0/1235165, in_queue=1235165, util=99.08% 00:21:13.310 11:15:32 -- target/multiconnection.sh@36 -- # sync 00:21:13.310 11:15:32 -- target/multiconnection.sh@37 -- # seq 1 11 00:21:13.310 11:15:32 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:13.310 11:15:32 -- target/multiconnection.sh@38 -- # nvme disconnect 
-n nqn.2016-06.io.spdk:cnode1 00:21:13.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:13.310 11:15:33 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:21:13.310 11:15:33 -- common/autotest_common.sh@1208 -- # local i=0 00:21:13.310 11:15:33 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:21:13.310 11:15:33 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:21:13.310 11:15:33 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:21:13.310 11:15:33 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:21:13.310 11:15:33 -- common/autotest_common.sh@1220 -- # return 0 00:21:13.310 11:15:33 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:13.310 11:15:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.310 11:15:33 -- common/autotest_common.sh@10 -- # set +x 00:21:13.310 11:15:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.310 11:15:33 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:13.310 11:15:33 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:21:14.243 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:21:14.243 11:15:34 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:21:14.243 11:15:34 -- common/autotest_common.sh@1208 -- # local i=0 00:21:14.243 11:15:34 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:21:14.243 11:15:34 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:21:14.500 11:15:34 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:21:14.500 11:15:34 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:21:14.500 11:15:34 -- common/autotest_common.sh@1220 -- # return 0 00:21:14.500 11:15:34 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:14.500 11:15:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.500 11:15:34 -- common/autotest_common.sh@10 -- # set +x 00:21:14.500 11:15:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.500 11:15:34 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:14.500 11:15:34 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:21:15.431 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:21:15.431 11:15:35 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:21:15.431 11:15:35 -- common/autotest_common.sh@1208 -- # local i=0 00:21:15.431 11:15:35 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:21:15.431 11:15:35 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:21:15.431 11:15:35 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:21:15.431 11:15:35 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:21:15.431 11:15:35 -- common/autotest_common.sh@1220 -- # return 0 00:21:15.431 11:15:35 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:15.431 11:15:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.431 11:15:35 -- common/autotest_common.sh@10 -- # set +x 00:21:15.431 11:15:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.431 11:15:35 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:15.431 11:15:35 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:21:16.360 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 
controller(s) 00:21:16.360 11:15:36 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:21:16.360 11:15:36 -- common/autotest_common.sh@1208 -- # local i=0 00:21:16.360 11:15:36 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:21:16.360 11:15:36 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:21:16.360 11:15:36 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:21:16.360 11:15:36 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:21:16.360 11:15:36 -- common/autotest_common.sh@1220 -- # return 0 00:21:16.360 11:15:36 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:21:16.360 11:15:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.360 11:15:36 -- common/autotest_common.sh@10 -- # set +x 00:21:16.360 11:15:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.360 11:15:36 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:16.360 11:15:36 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:21:17.292 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:21:17.292 11:15:37 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:21:17.292 11:15:37 -- common/autotest_common.sh@1208 -- # local i=0 00:21:17.292 11:15:37 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:21:17.292 11:15:37 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:21:17.292 11:15:37 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:21:17.292 11:15:37 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:21:17.292 11:15:37 -- common/autotest_common.sh@1220 -- # return 0 00:21:17.292 11:15:37 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:21:17.292 11:15:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.292 11:15:37 -- common/autotest_common.sh@10 -- # set +x 00:21:17.292 11:15:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.292 11:15:37 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:17.293 11:15:37 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:21:18.225 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:21:18.225 11:15:38 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:21:18.225 11:15:38 -- common/autotest_common.sh@1208 -- # local i=0 00:21:18.225 11:15:38 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:21:18.225 11:15:38 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:21:18.225 11:15:38 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:21:18.225 11:15:38 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:21:18.225 11:15:38 -- common/autotest_common.sh@1220 -- # return 0 00:21:18.225 11:15:38 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:21:18.225 11:15:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.225 11:15:38 -- common/autotest_common.sh@10 -- # set +x 00:21:18.225 11:15:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.225 11:15:38 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:18.482 11:15:38 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:21:19.414 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:21:19.415 11:15:39 -- target/multiconnection.sh@39 -- # waitforserial_disconnect 
SPDK7 00:21:19.415 11:15:39 -- common/autotest_common.sh@1208 -- # local i=0 00:21:19.415 11:15:39 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:21:19.415 11:15:39 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:21:19.415 11:15:39 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:21:19.415 11:15:39 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:21:19.415 11:15:39 -- common/autotest_common.sh@1220 -- # return 0 00:21:19.415 11:15:39 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:21:19.415 11:15:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.415 11:15:39 -- common/autotest_common.sh@10 -- # set +x 00:21:19.415 11:15:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.415 11:15:39 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:19.415 11:15:39 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:21:20.381 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:21:20.381 11:15:40 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:21:20.381 11:15:40 -- common/autotest_common.sh@1208 -- # local i=0 00:21:20.381 11:15:40 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:21:20.381 11:15:40 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:21:20.381 11:15:40 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:21:20.381 11:15:40 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:21:20.381 11:15:40 -- common/autotest_common.sh@1220 -- # return 0 00:21:20.381 11:15:40 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:21:20.381 11:15:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.381 11:15:40 -- common/autotest_common.sh@10 -- # set +x 00:21:20.381 11:15:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.381 11:15:40 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:20.381 11:15:40 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:21:21.312 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:21:21.312 11:15:41 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:21:21.312 11:15:41 -- common/autotest_common.sh@1208 -- # local i=0 00:21:21.312 11:15:41 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:21:21.312 11:15:41 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:21:21.312 11:15:41 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:21:21.312 11:15:41 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:21:21.312 11:15:41 -- common/autotest_common.sh@1220 -- # return 0 00:21:21.312 11:15:41 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:21:21.312 11:15:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.312 11:15:41 -- common/autotest_common.sh@10 -- # set +x 00:21:21.312 11:15:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.312 11:15:41 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:21.312 11:15:41 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:21:22.243 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:21:22.243 11:15:42 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:21:22.243 11:15:42 -- common/autotest_common.sh@1208 -- # local i=0 00:21:22.243 
11:15:42 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:21:22.243 11:15:42 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:21:22.243 11:15:42 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:21:22.243 11:15:42 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:21:22.243 11:15:42 -- common/autotest_common.sh@1220 -- # return 0 00:21:22.243 11:15:42 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:21:22.243 11:15:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.243 11:15:42 -- common/autotest_common.sh@10 -- # set +x 00:21:22.243 11:15:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.243 11:15:42 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:22.243 11:15:42 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:21:23.174 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:21:23.174 11:15:43 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:21:23.174 11:15:43 -- common/autotest_common.sh@1208 -- # local i=0 00:21:23.174 11:15:43 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:21:23.174 11:15:43 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:21:23.174 11:15:43 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:21:23.174 11:15:43 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:21:23.174 11:15:43 -- common/autotest_common.sh@1220 -- # return 0 00:21:23.174 11:15:43 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:21:23.174 11:15:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.174 11:15:43 -- common/autotest_common.sh@10 -- # set +x 00:21:23.174 11:15:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.174 11:15:43 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:21:23.174 11:15:43 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:21:23.174 11:15:43 -- target/multiconnection.sh@47 -- # nvmftestfini 00:21:23.174 11:15:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:23.174 11:15:43 -- nvmf/common.sh@116 -- # sync 00:21:23.174 11:15:43 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:21:23.174 11:15:43 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:21:23.174 11:15:43 -- nvmf/common.sh@119 -- # set +e 00:21:23.174 11:15:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:23.174 11:15:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:21:23.174 rmmod nvme_rdma 00:21:23.432 rmmod nvme_fabrics 00:21:23.432 11:15:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:23.432 11:15:43 -- nvmf/common.sh@123 -- # set -e 00:21:23.432 11:15:43 -- nvmf/common.sh@124 -- # return 0 00:21:23.432 11:15:43 -- nvmf/common.sh@477 -- # '[' -n 1668985 ']' 00:21:23.432 11:15:43 -- nvmf/common.sh@478 -- # killprocess 1668985 00:21:23.432 11:15:43 -- common/autotest_common.sh@936 -- # '[' -z 1668985 ']' 00:21:23.432 11:15:43 -- common/autotest_common.sh@940 -- # kill -0 1668985 00:21:23.432 11:15:43 -- common/autotest_common.sh@941 -- # uname 00:21:23.432 11:15:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:23.432 11:15:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1668985 00:21:23.432 11:15:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:23.432 11:15:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:23.432 11:15:43 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 1668985' 00:21:23.432 killing process with pid 1668985 00:21:23.432 11:15:43 -- common/autotest_common.sh@955 -- # kill 1668985 00:21:23.432 11:15:43 -- common/autotest_common.sh@960 -- # wait 1668985 00:21:23.999 11:15:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:23.999 11:15:44 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:21:23.999 00:21:23.999 real 1m13.394s 00:21:23.999 user 4m47.466s 00:21:23.999 sys 0m16.276s 00:21:23.999 11:15:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:23.999 11:15:44 -- common/autotest_common.sh@10 -- # set +x 00:21:23.999 ************************************ 00:21:23.999 END TEST nvmf_multiconnection 00:21:23.999 ************************************ 00:21:23.999 11:15:44 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:21:23.999 11:15:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:23.999 11:15:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:23.999 11:15:44 -- common/autotest_common.sh@10 -- # set +x 00:21:23.999 ************************************ 00:21:23.999 START TEST nvmf_initiator_timeout 00:21:23.999 ************************************ 00:21:23.999 11:15:44 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:21:23.999 * Looking for test storage... 00:21:23.999 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:23.999 11:15:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:23.999 11:15:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:23.999 11:15:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:23.999 11:15:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:23.999 11:15:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:23.999 11:15:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:23.999 11:15:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:23.999 11:15:44 -- scripts/common.sh@335 -- # IFS=.-: 00:21:23.999 11:15:44 -- scripts/common.sh@335 -- # read -ra ver1 00:21:23.999 11:15:44 -- scripts/common.sh@336 -- # IFS=.-: 00:21:23.999 11:15:44 -- scripts/common.sh@336 -- # read -ra ver2 00:21:23.999 11:15:44 -- scripts/common.sh@337 -- # local 'op=<' 00:21:23.999 11:15:44 -- scripts/common.sh@339 -- # ver1_l=2 00:21:23.999 11:15:44 -- scripts/common.sh@340 -- # ver2_l=1 00:21:23.999 11:15:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:23.999 11:15:44 -- scripts/common.sh@343 -- # case "$op" in 00:21:23.999 11:15:44 -- scripts/common.sh@344 -- # : 1 00:21:23.999 11:15:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:23.999 11:15:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:23.999 11:15:44 -- scripts/common.sh@364 -- # decimal 1 00:21:23.999 11:15:44 -- scripts/common.sh@352 -- # local d=1 00:21:23.999 11:15:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:23.999 11:15:44 -- scripts/common.sh@354 -- # echo 1 00:21:23.999 11:15:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:23.999 11:15:44 -- scripts/common.sh@365 -- # decimal 2 00:21:23.999 11:15:44 -- scripts/common.sh@352 -- # local d=2 00:21:23.999 11:15:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:23.999 11:15:44 -- scripts/common.sh@354 -- # echo 2 00:21:23.999 11:15:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:23.999 11:15:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:23.999 11:15:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:23.999 11:15:44 -- scripts/common.sh@367 -- # return 0 00:21:23.999 11:15:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:23.999 11:15:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:23.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.999 --rc genhtml_branch_coverage=1 00:21:23.999 --rc genhtml_function_coverage=1 00:21:23.999 --rc genhtml_legend=1 00:21:23.999 --rc geninfo_all_blocks=1 00:21:23.999 --rc geninfo_unexecuted_blocks=1 00:21:23.999 00:21:23.999 ' 00:21:23.999 11:15:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:23.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.999 --rc genhtml_branch_coverage=1 00:21:23.999 --rc genhtml_function_coverage=1 00:21:23.999 --rc genhtml_legend=1 00:21:23.999 --rc geninfo_all_blocks=1 00:21:23.999 --rc geninfo_unexecuted_blocks=1 00:21:23.999 00:21:23.999 ' 00:21:23.999 11:15:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:23.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.999 --rc genhtml_branch_coverage=1 00:21:23.999 --rc genhtml_function_coverage=1 00:21:23.999 --rc genhtml_legend=1 00:21:23.999 --rc geninfo_all_blocks=1 00:21:23.999 --rc geninfo_unexecuted_blocks=1 00:21:23.999 00:21:23.999 ' 00:21:23.999 11:15:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:23.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.999 --rc genhtml_branch_coverage=1 00:21:23.999 --rc genhtml_function_coverage=1 00:21:23.999 --rc genhtml_legend=1 00:21:23.999 --rc geninfo_all_blocks=1 00:21:23.999 --rc geninfo_unexecuted_blocks=1 00:21:23.999 00:21:23.999 ' 00:21:23.999 11:15:44 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:23.999 11:15:44 -- nvmf/common.sh@7 -- # uname -s 00:21:23.999 11:15:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:23.999 11:15:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:23.999 11:15:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:23.999 11:15:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:23.999 11:15:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:23.999 11:15:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:23.999 11:15:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:23.999 11:15:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:23.999 11:15:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:23.999 11:15:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:24.000 11:15:44 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:21:24.000 11:15:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:21:24.000 11:15:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:24.000 11:15:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:24.000 11:15:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:24.000 11:15:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:24.000 11:15:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:24.000 11:15:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:24.000 11:15:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:24.000 11:15:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.000 11:15:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.000 11:15:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.000 11:15:44 -- paths/export.sh@5 -- # export PATH 00:21:24.000 11:15:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.000 11:15:44 -- nvmf/common.sh@46 -- # : 0 00:21:24.000 11:15:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:24.000 11:15:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:24.000 11:15:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:24.000 11:15:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:24.000 11:15:44 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:24.000 11:15:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:24.000 11:15:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:24.000 11:15:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:24.000 11:15:44 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:24.000 11:15:44 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:24.000 11:15:44 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:21:24.000 11:15:44 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:21:24.000 11:15:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:24.000 11:15:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:24.000 11:15:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:24.000 11:15:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:24.000 11:15:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.000 11:15:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:24.000 11:15:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:24.000 11:15:44 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:24.000 11:15:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:24.000 11:15:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:24.000 11:15:44 -- common/autotest_common.sh@10 -- # set +x 00:21:29.267 11:15:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:29.267 11:15:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:29.267 11:15:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:29.267 11:15:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:29.267 11:15:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:29.267 11:15:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:29.267 11:15:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:29.267 11:15:49 -- nvmf/common.sh@294 -- # net_devs=() 00:21:29.267 11:15:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:29.267 11:15:49 -- nvmf/common.sh@295 -- # e810=() 00:21:29.267 11:15:49 -- nvmf/common.sh@295 -- # local -ga e810 00:21:29.267 11:15:49 -- nvmf/common.sh@296 -- # x722=() 00:21:29.267 11:15:49 -- nvmf/common.sh@296 -- # local -ga x722 00:21:29.267 11:15:49 -- nvmf/common.sh@297 -- # mlx=() 00:21:29.267 11:15:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:29.267 11:15:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:29.267 11:15:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:29.267 11:15:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:29.267 11:15:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:29.267 11:15:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:29.267 11:15:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:29.267 11:15:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:29.267 11:15:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:29.267 11:15:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:29.267 11:15:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:29.267 11:15:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:29.267 11:15:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:29.267 11:15:49 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:21:29.267 11:15:49 -- nvmf/common.sh@321 -- # 
pci_devs+=("${x722[@]}") 00:21:29.267 11:15:49 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:21:29.267 11:15:49 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:21:29.267 11:15:49 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:21:29.267 11:15:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:29.268 11:15:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:29.268 11:15:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:21:29.268 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:21:29.268 11:15:49 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:21:29.268 11:15:49 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:21:29.268 11:15:49 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:29.268 11:15:49 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:29.268 11:15:49 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:21:29.268 11:15:49 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:21:29.268 11:15:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:29.268 11:15:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:21:29.268 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:21:29.268 11:15:49 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:21:29.268 11:15:49 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:21:29.268 11:15:49 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:29.268 11:15:49 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:29.268 11:15:49 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:21:29.268 11:15:49 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:21:29.268 11:15:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:29.268 11:15:49 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:21:29.268 11:15:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:29.268 11:15:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.268 11:15:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:29.268 11:15:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.268 11:15:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:21:29.268 Found net devices under 0000:18:00.0: mlx_0_0 00:21:29.268 11:15:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.268 11:15:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:29.268 11:15:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.268 11:15:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:29.268 11:15:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.268 11:15:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:21:29.268 Found net devices under 0000:18:00.1: mlx_0_1 00:21:29.268 11:15:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.268 11:15:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:29.268 11:15:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:29.268 11:15:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:29.268 11:15:49 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:21:29.268 11:15:49 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:21:29.268 11:15:49 -- nvmf/common.sh@408 -- # rdma_device_init 00:21:29.268 11:15:49 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:21:29.268 11:15:49 -- nvmf/common.sh@57 -- # uname 00:21:29.268 11:15:49 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:21:29.268 11:15:49 -- nvmf/common.sh@61 
-- # modprobe ib_cm 00:21:29.268 11:15:49 -- nvmf/common.sh@62 -- # modprobe ib_core 00:21:29.268 11:15:49 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:21:29.268 11:15:49 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:21:29.268 11:15:49 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:21:29.268 11:15:49 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:21:29.268 11:15:49 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:21:29.268 11:15:49 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:21:29.268 11:15:49 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:29.268 11:15:49 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:21:29.268 11:15:49 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:29.268 11:15:49 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:21:29.268 11:15:49 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:21:29.268 11:15:49 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:29.268 11:15:49 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:21:29.268 11:15:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:29.268 11:15:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.268 11:15:49 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:29.268 11:15:49 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:21:29.268 11:15:49 -- nvmf/common.sh@104 -- # continue 2 00:21:29.268 11:15:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:29.268 11:15:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.268 11:15:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:29.268 11:15:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.268 11:15:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:29.268 11:15:49 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:21:29.268 11:15:49 -- nvmf/common.sh@104 -- # continue 2 00:21:29.268 11:15:49 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:21:29.268 11:15:49 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:21:29.268 11:15:49 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:21:29.268 11:15:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:21:29.268 11:15:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:29.268 11:15:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:29.268 11:15:49 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:21:29.268 11:15:49 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:21:29.268 11:15:49 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:21:29.268 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:29.268 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:21:29.268 altname enp24s0f0np0 00:21:29.268 altname ens785f0np0 00:21:29.268 inet 192.168.100.8/24 scope global mlx_0_0 00:21:29.268 valid_lft forever preferred_lft forever 00:21:29.268 11:15:49 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:21:29.268 11:15:49 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:21:29.268 11:15:49 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:21:29.268 11:15:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:21:29.268 11:15:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:29.268 11:15:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:29.268 11:15:49 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:21:29.268 11:15:49 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:21:29.268 11:15:49 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:21:29.268 3: mlx_0_1: 
mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:29.268 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:21:29.268 altname enp24s0f1np1 00:21:29.268 altname ens785f1np1 00:21:29.268 inet 192.168.100.9/24 scope global mlx_0_1 00:21:29.268 valid_lft forever preferred_lft forever 00:21:29.268 11:15:49 -- nvmf/common.sh@410 -- # return 0 00:21:29.268 11:15:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:29.268 11:15:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:29.268 11:15:49 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:21:29.268 11:15:49 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:21:29.268 11:15:49 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:21:29.268 11:15:49 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:29.268 11:15:49 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:21:29.268 11:15:49 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:21:29.268 11:15:49 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:29.268 11:15:49 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:21:29.268 11:15:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:29.268 11:15:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.268 11:15:49 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:29.268 11:15:49 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:21:29.268 11:15:49 -- nvmf/common.sh@104 -- # continue 2 00:21:29.268 11:15:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:29.268 11:15:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.268 11:15:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:29.268 11:15:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.268 11:15:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:29.268 11:15:49 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:21:29.268 11:15:49 -- nvmf/common.sh@104 -- # continue 2 00:21:29.268 11:15:49 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:21:29.268 11:15:49 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:21:29.268 11:15:49 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:21:29.268 11:15:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:21:29.268 11:15:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:29.268 11:15:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:29.268 11:15:49 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:21:29.268 11:15:49 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:21:29.268 11:15:49 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:21:29.268 11:15:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:21:29.268 11:15:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:29.268 11:15:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:29.268 11:15:49 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:21:29.268 192.168.100.9' 00:21:29.268 11:15:49 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:21:29.268 192.168.100.9' 00:21:29.268 11:15:49 -- nvmf/common.sh@445 -- # head -n 1 00:21:29.268 11:15:49 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:29.268 11:15:49 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:21:29.268 192.168.100.9' 00:21:29.268 11:15:49 -- nvmf/common.sh@446 -- # tail -n +2 00:21:29.268 11:15:49 -- nvmf/common.sh@446 -- # head -n 1 00:21:29.268 11:15:49 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:29.268 11:15:49 -- 
nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:21:29.268 11:15:49 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:29.268 11:15:49 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:21:29.268 11:15:49 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:21:29.268 11:15:49 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:21:29.268 11:15:49 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:21:29.268 11:15:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:29.268 11:15:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:29.268 11:15:49 -- common/autotest_common.sh@10 -- # set +x 00:21:29.268 11:15:49 -- nvmf/common.sh@469 -- # nvmfpid=1685270 00:21:29.268 11:15:49 -- nvmf/common.sh@470 -- # waitforlisten 1685270 00:21:29.268 11:15:49 -- common/autotest_common.sh@829 -- # '[' -z 1685270 ']' 00:21:29.268 11:15:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.268 11:15:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:29.268 11:15:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.268 11:15:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:29.269 11:15:49 -- common/autotest_common.sh@10 -- # set +x 00:21:29.269 11:15:49 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:29.269 [2024-12-13 11:15:49.757506] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:29.269 [2024-12-13 11:15:49.757550] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.269 EAL: No free 2048 kB hugepages reported on node 1 00:21:29.269 [2024-12-13 11:15:49.807636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:29.526 [2024-12-13 11:15:49.880464] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:29.526 [2024-12-13 11:15:49.880563] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:29.526 [2024-12-13 11:15:49.880572] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:29.526 [2024-12-13 11:15:49.880577] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
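The interface and address discovery traced above (get_rdma_if_list, get_ip_address, and the head/tail split into first and second target IPs) reduces to a handful of ip/awk/cut invocations. A minimal sketch of that logic, assuming the mlx_0_0/mlx_0_1 interface names seen in this run rather than the full rxe_cfg filtering that nvmf/common.sh performs:

# Sketch of the address discovery traced above (not the literal nvmf/common.sh code).
get_ip_address() {
    local interface=$1
    # Fourth field of `ip -o -4 addr show` is the CIDR address; strip the prefix length.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
RDMA_IP_LIST=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9 in this run

Both addresses land in the 192.168.100.0/24 test subnet that the listener and nvme connect commands later in this run rely on.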
00:21:29.526 [2024-12-13 11:15:49.880617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:29.526 [2024-12-13 11:15:49.880635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:29.526 [2024-12-13 11:15:49.880719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:29.526 [2024-12-13 11:15:49.880721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.091 11:15:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:30.091 11:15:50 -- common/autotest_common.sh@862 -- # return 0 00:21:30.091 11:15:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:30.091 11:15:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:30.091 11:15:50 -- common/autotest_common.sh@10 -- # set +x 00:21:30.091 11:15:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.091 11:15:50 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:30.091 11:15:50 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:30.091 11:15:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.091 11:15:50 -- common/autotest_common.sh@10 -- # set +x 00:21:30.091 Malloc0 00:21:30.091 11:15:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.091 11:15:50 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:21:30.091 11:15:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.091 11:15:50 -- common/autotest_common.sh@10 -- # set +x 00:21:30.091 Delay0 00:21:30.091 11:15:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.091 11:15:50 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:30.091 11:15:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.091 11:15:50 -- common/autotest_common.sh@10 -- # set +x 00:21:30.091 [2024-12-13 11:15:50.645008] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x20b42c0/0x1f22b00) succeed. 00:21:30.091 [2024-12-13 11:15:50.653285] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20b5860/0x1fa2b40) succeed. 
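Everything the test does to the running target goes over the SPDK JSON-RPC socket; rpc_cmd in the trace above is functionally equivalent to invoking scripts/rpc.py against /var/tmp/spdk.sock. Issued by hand, the bring-up RPCs seen so far would look roughly like the following sketch (arguments copied from the trace; the four bdev_delay_create values are the initial average and p99 read/write latencies in microseconds):

# Roughly the RPCs traced above, issued manually instead of through rpc_cmd.
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB backing bdev, 512 B blocks
./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

Delay0 wraps Malloc0 so that the bdev_delay_update_latency calls later in the run can stretch its latencies into the tens-of-seconds range mid-fio and then drop them back to 30 microseconds, which is the initiator-timeout behaviour this test exercises.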
00:21:30.349 11:15:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.349 11:15:50 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:30.349 11:15:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.349 11:15:50 -- common/autotest_common.sh@10 -- # set +x 00:21:30.349 11:15:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.349 11:15:50 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:30.349 11:15:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.349 11:15:50 -- common/autotest_common.sh@10 -- # set +x 00:21:30.349 11:15:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.349 11:15:50 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:30.349 11:15:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.349 11:15:50 -- common/autotest_common.sh@10 -- # set +x 00:21:30.349 [2024-12-13 11:15:50.787513] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:30.349 11:15:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.349 11:15:50 -- target/initiator_timeout.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:21:31.279 11:15:51 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:21:31.279 11:15:51 -- common/autotest_common.sh@1187 -- # local i=0 00:21:31.279 11:15:51 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:31.279 11:15:51 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:31.279 11:15:51 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:33.799 11:15:53 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:33.799 11:15:53 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:33.799 11:15:53 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:21:33.799 11:15:53 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:33.799 11:15:53 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:33.799 11:15:53 -- common/autotest_common.sh@1197 -- # return 0 00:21:33.799 11:15:53 -- target/initiator_timeout.sh@35 -- # fio_pid=1685882 00:21:33.799 11:15:53 -- target/initiator_timeout.sh@37 -- # sleep 3 00:21:33.799 11:15:53 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:21:33.799 [global] 00:21:33.799 thread=1 00:21:33.799 invalidate=1 00:21:33.799 rw=write 00:21:33.799 time_based=1 00:21:33.799 runtime=60 00:21:33.799 ioengine=libaio 00:21:33.799 direct=1 00:21:33.799 bs=4096 00:21:33.799 iodepth=1 00:21:33.799 norandommap=0 00:21:33.799 numjobs=1 00:21:33.799 00:21:33.799 verify_dump=1 00:21:33.799 verify_backlog=512 00:21:33.799 verify_state_save=0 00:21:33.799 do_verify=1 00:21:33.799 verify=crc32c-intel 00:21:33.799 [job0] 00:21:33.799 filename=/dev/nvme0n1 00:21:33.799 Could not set queue depth (nvme0n1) 00:21:33.799 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:33.799 fio-3.35 00:21:33.799 Starting 1 thread 00:21:36.320 11:15:56 -- 
target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:21:36.320 11:15:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.320 11:15:56 -- common/autotest_common.sh@10 -- # set +x 00:21:36.320 true 00:21:36.320 11:15:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.320 11:15:56 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:21:36.320 11:15:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.320 11:15:56 -- common/autotest_common.sh@10 -- # set +x 00:21:36.320 true 00:21:36.320 11:15:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.320 11:15:56 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:21:36.320 11:15:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.320 11:15:56 -- common/autotest_common.sh@10 -- # set +x 00:21:36.320 true 00:21:36.320 11:15:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.320 11:15:56 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:21:36.320 11:15:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.320 11:15:56 -- common/autotest_common.sh@10 -- # set +x 00:21:36.320 true 00:21:36.320 11:15:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.320 11:15:56 -- target/initiator_timeout.sh@45 -- # sleep 3 00:21:39.591 11:15:59 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:21:39.591 11:15:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.591 11:15:59 -- common/autotest_common.sh@10 -- # set +x 00:21:39.591 true 00:21:39.591 11:15:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.591 11:15:59 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:21:39.591 11:15:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.591 11:15:59 -- common/autotest_common.sh@10 -- # set +x 00:21:39.591 true 00:21:39.591 11:15:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.591 11:15:59 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:21:39.591 11:15:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.591 11:15:59 -- common/autotest_common.sh@10 -- # set +x 00:21:39.591 true 00:21:39.591 11:15:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.592 11:15:59 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:21:39.592 11:15:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.592 11:15:59 -- common/autotest_common.sh@10 -- # set +x 00:21:39.592 true 00:21:39.592 11:15:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.592 11:15:59 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:21:39.592 11:15:59 -- target/initiator_timeout.sh@54 -- # wait 1685882 00:22:35.872 00:22:35.872 job0: (groupid=0, jobs=1): err= 0: pid=1686154: Fri Dec 13 11:16:54 2024 00:22:35.872 read: IOPS=1433, BW=5734KiB/s (5872kB/s)(336MiB/60000msec) 00:22:35.872 slat (usec): min=3, max=15929, avg= 6.61, stdev=69.61 00:22:35.872 clat (usec): min=64, max=271, avg=93.71, stdev= 5.85 00:22:35.872 lat (usec): min=80, max=16043, avg=100.31, stdev=70.02 00:22:35.872 clat percentiles (usec): 00:22:35.872 | 1.00th=[ 82], 5.00th=[ 85], 10.00th=[ 87], 20.00th=[ 89], 00:22:35.872 | 30.00th=[ 91], 40.00th=[ 92], 50.00th=[ 94], 60.00th=[ 95], 
00:22:35.872 | 70.00th=[ 97], 80.00th=[ 99], 90.00th=[ 101], 95.00th=[ 103], 00:22:35.872 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 115], 99.95th=[ 125], 00:22:35.872 | 99.99th=[ 186] 00:22:35.872 write: IOPS=1433, BW=5736KiB/s (5873kB/s)(336MiB/60000msec); 0 zone resets 00:22:35.872 slat (usec): min=4, max=935, avg= 8.20, stdev= 4.26 00:22:35.872 clat (usec): min=71, max=42511k, avg=585.25, stdev=144930.79 00:22:35.872 lat (usec): min=79, max=42511k, avg=593.46, stdev=144930.80 00:22:35.872 clat percentiles (usec): 00:22:35.872 | 1.00th=[ 79], 5.00th=[ 83], 10.00th=[ 85], 20.00th=[ 87], 00:22:35.872 | 30.00th=[ 88], 40.00th=[ 90], 50.00th=[ 91], 60.00th=[ 93], 00:22:35.872 | 70.00th=[ 94], 80.00th=[ 96], 90.00th=[ 99], 95.00th=[ 101], 00:22:35.872 | 99.00th=[ 105], 99.50th=[ 108], 99.90th=[ 116], 99.95th=[ 127], 00:22:35.872 | 99.99th=[ 215] 00:22:35.873 bw ( KiB/s): min= 4048, max=21712, per=100.00%, avg=19192.69, stdev=3074.26, samples=35 00:22:35.873 iops : min= 1012, max= 5428, avg=4798.17, stdev=768.57, samples=35 00:22:35.873 lat (usec) : 100=90.02%, 250=9.98%, 500=0.01% 00:22:35.873 lat (msec) : >=2000=0.01% 00:22:35.873 cpu : usr=1.45%, sys=2.52%, ctx=172062, majf=0, minf=131 00:22:35.873 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:35.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:35.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:35.873 issued rwts: total=86016,86035,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:35.873 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:35.873 00:22:35.873 Run status group 0 (all jobs): 00:22:35.873 READ: bw=5734KiB/s (5872kB/s), 5734KiB/s-5734KiB/s (5872kB/s-5872kB/s), io=336MiB (352MB), run=60000-60000msec 00:22:35.873 WRITE: bw=5736KiB/s (5873kB/s), 5736KiB/s-5736KiB/s (5873kB/s-5873kB/s), io=336MiB (352MB), run=60000-60000msec 00:22:35.873 00:22:35.873 Disk stats (read/write): 00:22:35.873 nvme0n1: ios=85929/85579, merge=0/0, ticks=7503/7230, in_queue=14733, util=99.54% 00:22:35.873 11:16:54 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:35.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:35.873 11:16:55 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:22:35.873 11:16:55 -- common/autotest_common.sh@1208 -- # local i=0 00:22:35.873 11:16:55 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:35.873 11:16:55 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:35.873 11:16:55 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:35.873 11:16:55 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:35.873 11:16:55 -- common/autotest_common.sh@1220 -- # return 0 00:22:35.873 11:16:55 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:22:35.873 11:16:55 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:22:35.873 nvmf hotplug test: fio successful as expected 00:22:35.873 11:16:55 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:35.873 11:16:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.873 11:16:55 -- common/autotest_common.sh@10 -- # set +x 00:22:35.873 11:16:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.873 11:16:55 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:22:35.873 11:16:55 -- 
target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:22:35.873 11:16:55 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:22:35.873 11:16:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:35.873 11:16:55 -- nvmf/common.sh@116 -- # sync 00:22:35.873 11:16:55 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:22:35.873 11:16:55 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:22:35.873 11:16:55 -- nvmf/common.sh@119 -- # set +e 00:22:35.873 11:16:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:35.873 11:16:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:22:35.873 rmmod nvme_rdma 00:22:35.873 rmmod nvme_fabrics 00:22:35.873 11:16:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:35.873 11:16:55 -- nvmf/common.sh@123 -- # set -e 00:22:35.873 11:16:55 -- nvmf/common.sh@124 -- # return 0 00:22:35.873 11:16:55 -- nvmf/common.sh@477 -- # '[' -n 1685270 ']' 00:22:35.873 11:16:55 -- nvmf/common.sh@478 -- # killprocess 1685270 00:22:35.873 11:16:55 -- common/autotest_common.sh@936 -- # '[' -z 1685270 ']' 00:22:35.873 11:16:55 -- common/autotest_common.sh@940 -- # kill -0 1685270 00:22:35.873 11:16:55 -- common/autotest_common.sh@941 -- # uname 00:22:35.873 11:16:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:35.873 11:16:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1685270 00:22:35.873 11:16:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:35.873 11:16:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:35.873 11:16:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1685270' 00:22:35.873 killing process with pid 1685270 00:22:35.873 11:16:55 -- common/autotest_common.sh@955 -- # kill 1685270 00:22:35.873 11:16:55 -- common/autotest_common.sh@960 -- # wait 1685270 00:22:35.873 11:16:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:35.873 11:16:55 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:22:35.873 00:22:35.873 real 1m11.268s 00:22:35.873 user 4m31.989s 00:22:35.873 sys 0m6.167s 00:22:35.873 11:16:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:35.873 11:16:55 -- common/autotest_common.sh@10 -- # set +x 00:22:35.873 ************************************ 00:22:35.873 END TEST nvmf_initiator_timeout 00:22:35.873 ************************************ 00:22:35.873 11:16:55 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:22:35.873 11:16:55 -- nvmf/nvmf.sh@70 -- # '[' rdma = tcp ']' 00:22:35.873 11:16:55 -- nvmf/nvmf.sh@76 -- # [[ '' -eq 1 ]] 00:22:35.873 11:16:55 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:22:35.873 11:16:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:35.873 11:16:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:35.873 11:16:55 -- common/autotest_common.sh@10 -- # set +x 00:22:35.873 ************************************ 00:22:35.873 START TEST nvmf_shutdown 00:22:35.873 ************************************ 00:22:35.873 11:16:55 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:22:35.873 * Looking for test storage... 
00:22:35.873 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:22:35.873 11:16:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:35.873 11:16:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:35.873 11:16:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:35.873 11:16:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:35.873 11:16:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:35.873 11:16:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:35.873 11:16:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:35.873 11:16:55 -- scripts/common.sh@335 -- # IFS=.-: 00:22:35.873 11:16:55 -- scripts/common.sh@335 -- # read -ra ver1 00:22:35.873 11:16:55 -- scripts/common.sh@336 -- # IFS=.-: 00:22:35.873 11:16:55 -- scripts/common.sh@336 -- # read -ra ver2 00:22:35.873 11:16:55 -- scripts/common.sh@337 -- # local 'op=<' 00:22:35.873 11:16:55 -- scripts/common.sh@339 -- # ver1_l=2 00:22:35.873 11:16:55 -- scripts/common.sh@340 -- # ver2_l=1 00:22:35.873 11:16:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:35.873 11:16:55 -- scripts/common.sh@343 -- # case "$op" in 00:22:35.873 11:16:55 -- scripts/common.sh@344 -- # : 1 00:22:35.873 11:16:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:35.873 11:16:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:35.873 11:16:55 -- scripts/common.sh@364 -- # decimal 1 00:22:35.873 11:16:55 -- scripts/common.sh@352 -- # local d=1 00:22:35.873 11:16:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:35.873 11:16:55 -- scripts/common.sh@354 -- # echo 1 00:22:35.873 11:16:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:35.873 11:16:55 -- scripts/common.sh@365 -- # decimal 2 00:22:35.873 11:16:55 -- scripts/common.sh@352 -- # local d=2 00:22:35.873 11:16:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:35.873 11:16:55 -- scripts/common.sh@354 -- # echo 2 00:22:35.873 11:16:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:35.873 11:16:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:35.873 11:16:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:35.873 11:16:55 -- scripts/common.sh@367 -- # return 0 00:22:35.873 11:16:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:35.873 11:16:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:35.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.873 --rc genhtml_branch_coverage=1 00:22:35.873 --rc genhtml_function_coverage=1 00:22:35.873 --rc genhtml_legend=1 00:22:35.873 --rc geninfo_all_blocks=1 00:22:35.873 --rc geninfo_unexecuted_blocks=1 00:22:35.873 00:22:35.873 ' 00:22:35.873 11:16:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:35.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.873 --rc genhtml_branch_coverage=1 00:22:35.873 --rc genhtml_function_coverage=1 00:22:35.873 --rc genhtml_legend=1 00:22:35.873 --rc geninfo_all_blocks=1 00:22:35.873 --rc geninfo_unexecuted_blocks=1 00:22:35.873 00:22:35.873 ' 00:22:35.873 11:16:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:35.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.873 --rc genhtml_branch_coverage=1 00:22:35.873 --rc genhtml_function_coverage=1 00:22:35.873 --rc genhtml_legend=1 00:22:35.873 --rc geninfo_all_blocks=1 00:22:35.873 --rc geninfo_unexecuted_blocks=1 00:22:35.873 00:22:35.873 ' 
00:22:35.873 11:16:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:35.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.873 --rc genhtml_branch_coverage=1 00:22:35.873 --rc genhtml_function_coverage=1 00:22:35.873 --rc genhtml_legend=1 00:22:35.873 --rc geninfo_all_blocks=1 00:22:35.873 --rc geninfo_unexecuted_blocks=1 00:22:35.873 00:22:35.873 ' 00:22:35.873 11:16:55 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:35.873 11:16:55 -- nvmf/common.sh@7 -- # uname -s 00:22:35.873 11:16:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:35.873 11:16:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:35.873 11:16:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:35.873 11:16:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:35.873 11:16:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:35.873 11:16:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:35.873 11:16:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:35.873 11:16:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:35.873 11:16:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:35.873 11:16:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:35.873 11:16:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:22:35.874 11:16:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:22:35.874 11:16:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:35.874 11:16:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:35.874 11:16:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:35.874 11:16:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:35.874 11:16:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:35.874 11:16:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:35.874 11:16:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:35.874 11:16:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.874 11:16:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.874 11:16:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.874 11:16:55 -- paths/export.sh@5 -- # export PATH 00:22:35.874 11:16:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.874 11:16:55 -- nvmf/common.sh@46 -- # : 0 00:22:35.874 11:16:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:35.874 11:16:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:35.874 11:16:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:35.874 11:16:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:35.874 11:16:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:35.874 11:16:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:35.874 11:16:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:35.874 11:16:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:35.874 11:16:55 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:35.874 11:16:55 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:35.874 11:16:55 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:35.874 11:16:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:35.874 11:16:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:35.874 11:16:55 -- common/autotest_common.sh@10 -- # set +x 00:22:35.874 ************************************ 00:22:35.874 START TEST nvmf_shutdown_tc1 00:22:35.874 ************************************ 00:22:35.874 11:16:55 -- common/autotest_common.sh@1114 -- # nvmf_shutdown_tc1 00:22:35.874 11:16:55 -- target/shutdown.sh@74 -- # starttarget 00:22:35.874 11:16:55 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:35.874 11:16:55 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:22:35.874 11:16:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:35.874 11:16:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:35.874 11:16:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:35.874 11:16:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:35.874 11:16:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.874 11:16:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:35.874 11:16:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.874 11:16:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:35.874 11:16:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:35.874 11:16:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:35.874 11:16:55 -- common/autotest_common.sh@10 -- # set +x 00:22:41.134 11:17:01 -- nvmf/common.sh@288 -- # 
local intel=0x8086 mellanox=0x15b3 pci 00:22:41.134 11:17:01 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:41.134 11:17:01 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:41.134 11:17:01 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:41.134 11:17:01 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:41.134 11:17:01 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:41.134 11:17:01 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:41.134 11:17:01 -- nvmf/common.sh@294 -- # net_devs=() 00:22:41.134 11:17:01 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:41.134 11:17:01 -- nvmf/common.sh@295 -- # e810=() 00:22:41.134 11:17:01 -- nvmf/common.sh@295 -- # local -ga e810 00:22:41.134 11:17:01 -- nvmf/common.sh@296 -- # x722=() 00:22:41.134 11:17:01 -- nvmf/common.sh@296 -- # local -ga x722 00:22:41.134 11:17:01 -- nvmf/common.sh@297 -- # mlx=() 00:22:41.134 11:17:01 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:41.134 11:17:01 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:41.134 11:17:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:41.134 11:17:01 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:41.134 11:17:01 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:41.134 11:17:01 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:41.134 11:17:01 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:41.134 11:17:01 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:41.134 11:17:01 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:41.134 11:17:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:41.134 11:17:01 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:41.134 11:17:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:41.134 11:17:01 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:41.134 11:17:01 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:22:41.134 11:17:01 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:22:41.134 11:17:01 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:22:41.134 11:17:01 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:22:41.134 11:17:01 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:22:41.134 11:17:01 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:41.134 11:17:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:41.134 11:17:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:22:41.134 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:22:41.134 11:17:01 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:22:41.134 11:17:01 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:22:41.134 11:17:01 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:41.134 11:17:01 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:41.134 11:17:01 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:22:41.134 11:17:01 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:22:41.134 11:17:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:41.134 11:17:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:22:41.134 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:22:41.134 11:17:01 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:22:41.134 11:17:01 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:22:41.134 11:17:01 -- nvmf/common.sh@349 -- # [[ 0x1015 == 
\0\x\1\0\1\7 ]] 00:22:41.134 11:17:01 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:41.134 11:17:01 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:22:41.134 11:17:01 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:22:41.134 11:17:01 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:41.134 11:17:01 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:22:41.134 11:17:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:41.134 11:17:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.134 11:17:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:41.134 11:17:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.134 11:17:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:22:41.134 Found net devices under 0000:18:00.0: mlx_0_0 00:22:41.134 11:17:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.134 11:17:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:41.134 11:17:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.134 11:17:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:41.134 11:17:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.134 11:17:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:22:41.134 Found net devices under 0000:18:00.1: mlx_0_1 00:22:41.134 11:17:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.134 11:17:01 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:41.134 11:17:01 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:41.134 11:17:01 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:41.134 11:17:01 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:22:41.134 11:17:01 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:22:41.134 11:17:01 -- nvmf/common.sh@408 -- # rdma_device_init 00:22:41.134 11:17:01 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:22:41.134 11:17:01 -- nvmf/common.sh@57 -- # uname 00:22:41.134 11:17:01 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:22:41.134 11:17:01 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:22:41.134 11:17:01 -- nvmf/common.sh@62 -- # modprobe ib_core 00:22:41.134 11:17:01 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:22:41.134 11:17:01 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:22:41.134 11:17:01 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:22:41.134 11:17:01 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:22:41.134 11:17:01 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:22:41.134 11:17:01 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:22:41.134 11:17:01 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:41.134 11:17:01 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:22:41.134 11:17:01 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:41.134 11:17:01 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:22:41.134 11:17:01 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:22:41.134 11:17:01 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:41.134 11:17:01 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:22:41.134 11:17:01 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:41.134 11:17:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:41.134 11:17:01 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:41.134 11:17:01 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:22:41.134 11:17:01 -- nvmf/common.sh@104 -- # continue 2 
00:22:41.134 11:17:01 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:41.134 11:17:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:41.134 11:17:01 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:41.134 11:17:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:41.134 11:17:01 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:41.134 11:17:01 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:22:41.134 11:17:01 -- nvmf/common.sh@104 -- # continue 2 00:22:41.134 11:17:01 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:22:41.134 11:17:01 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:22:41.134 11:17:01 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:22:41.134 11:17:01 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:22:41.134 11:17:01 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:41.134 11:17:01 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:41.134 11:17:01 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:22:41.134 11:17:01 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:22:41.134 11:17:01 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:22:41.134 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:41.134 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:22:41.134 altname enp24s0f0np0 00:22:41.134 altname ens785f0np0 00:22:41.134 inet 192.168.100.8/24 scope global mlx_0_0 00:22:41.134 valid_lft forever preferred_lft forever 00:22:41.134 11:17:01 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:22:41.134 11:17:01 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:22:41.134 11:17:01 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:22:41.134 11:17:01 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:22:41.134 11:17:01 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:41.134 11:17:01 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:41.134 11:17:01 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:22:41.134 11:17:01 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:22:41.134 11:17:01 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:22:41.134 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:41.134 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:22:41.134 altname enp24s0f1np1 00:22:41.134 altname ens785f1np1 00:22:41.134 inet 192.168.100.9/24 scope global mlx_0_1 00:22:41.134 valid_lft forever preferred_lft forever 00:22:41.134 11:17:01 -- nvmf/common.sh@410 -- # return 0 00:22:41.134 11:17:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:41.134 11:17:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:41.134 11:17:01 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:22:41.134 11:17:01 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:22:41.134 11:17:01 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:22:41.134 11:17:01 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:41.134 11:17:01 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:22:41.134 11:17:01 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:22:41.134 11:17:01 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:41.134 11:17:01 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:22:41.134 11:17:01 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:41.134 11:17:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:41.134 11:17:01 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:41.135 11:17:01 -- 
nvmf/common.sh@103 -- # echo mlx_0_0 00:22:41.135 11:17:01 -- nvmf/common.sh@104 -- # continue 2 00:22:41.135 11:17:01 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:41.135 11:17:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:41.135 11:17:01 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:41.135 11:17:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:41.135 11:17:01 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:41.135 11:17:01 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:22:41.135 11:17:01 -- nvmf/common.sh@104 -- # continue 2 00:22:41.135 11:17:01 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:22:41.135 11:17:01 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:22:41.135 11:17:01 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:22:41.135 11:17:01 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:22:41.135 11:17:01 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:41.135 11:17:01 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:41.135 11:17:01 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:22:41.135 11:17:01 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:22:41.135 11:17:01 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:22:41.135 11:17:01 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:41.135 11:17:01 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:22:41.135 11:17:01 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:41.135 11:17:01 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:22:41.135 192.168.100.9' 00:22:41.135 11:17:01 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:22:41.135 192.168.100.9' 00:22:41.135 11:17:01 -- nvmf/common.sh@445 -- # head -n 1 00:22:41.135 11:17:01 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:41.135 11:17:01 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:22:41.135 192.168.100.9' 00:22:41.135 11:17:01 -- nvmf/common.sh@446 -- # tail -n +2 00:22:41.135 11:17:01 -- nvmf/common.sh@446 -- # head -n 1 00:22:41.135 11:17:01 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:41.135 11:17:01 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:22:41.135 11:17:01 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:41.135 11:17:01 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:22:41.135 11:17:01 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:22:41.135 11:17:01 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:22:41.135 11:17:01 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:41.135 11:17:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:41.135 11:17:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:41.135 11:17:01 -- common/autotest_common.sh@10 -- # set +x 00:22:41.135 11:17:01 -- nvmf/common.sh@469 -- # nvmfpid=1700303 00:22:41.135 11:17:01 -- nvmf/common.sh@470 -- # waitforlisten 1700303 00:22:41.135 11:17:01 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:41.135 11:17:01 -- common/autotest_common.sh@829 -- # '[' -z 1700303 ']' 00:22:41.135 11:17:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.135 11:17:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:41.135 11:17:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:41.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:41.135 11:17:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:41.135 11:17:01 -- common/autotest_common.sh@10 -- # set +x 00:22:41.392 [2024-12-13 11:17:01.703759] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:41.392 [2024-12-13 11:17:01.703802] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:41.392 EAL: No free 2048 kB hugepages reported on node 1 00:22:41.392 [2024-12-13 11:17:01.754943] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:41.392 [2024-12-13 11:17:01.828059] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:41.392 [2024-12-13 11:17:01.828159] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:41.392 [2024-12-13 11:17:01.828167] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:41.392 [2024-12-13 11:17:01.828174] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:41.392 [2024-12-13 11:17:01.828213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:41.392 [2024-12-13 11:17:01.828297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:41.392 [2024-12-13 11:17:01.828406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:41.392 [2024-12-13 11:17:01.828407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:41.956 11:17:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:41.956 11:17:02 -- common/autotest_common.sh@862 -- # return 0 00:22:41.956 11:17:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:41.956 11:17:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:41.956 11:17:02 -- common/autotest_common.sh@10 -- # set +x 00:22:42.213 11:17:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:42.213 11:17:02 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:42.213 11:17:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.213 11:17:02 -- common/autotest_common.sh@10 -- # set +x 00:22:42.213 [2024-12-13 11:17:02.566694] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b27c50/0x1b2c140) succeed. 00:22:42.213 [2024-12-13 11:17:02.574813] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b29240/0x1b6d7e0) succeed. 
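The subsystem setup that follows is batched rather than issued one RPC at a time: shutdown.sh writes one block of commands per subsystem into rpcs.txt (the repeated for i in "${num_subsystems[@]}" / cat pairs in the trace) and then replays the whole file through a single rpc_cmd call, which is when the Malloc1 through Malloc10 bdevs and the 192.168.100.8:4420 listener appear. The exact per-subsystem block is not echoed in the trace, so the RPC names and serial numbers below are an illustrative guess, not the test's literal contents:

# Rough reconstruction of the batching pattern; treat the per-subsystem RPCs as assumptions.
num_subsystems=({1..10})
rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
rm -f "$rpcs"
for i in "${num_subsystems[@]}"; do
cat >> "$rpcs" <<EOF
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
EOF
done
rpc_cmd < "$rpcs"   # the single rpc_cmd call at shutdown.sh@35 replays the accumulated commands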
00:22:42.213 11:17:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.213 11:17:02 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:42.213 11:17:02 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:42.213 11:17:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:42.213 11:17:02 -- common/autotest_common.sh@10 -- # set +x 00:22:42.213 11:17:02 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:42.213 11:17:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:42.213 11:17:02 -- target/shutdown.sh@28 -- # cat 00:22:42.213 11:17:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:42.213 11:17:02 -- target/shutdown.sh@28 -- # cat 00:22:42.213 11:17:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:42.213 11:17:02 -- target/shutdown.sh@28 -- # cat 00:22:42.213 11:17:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:42.213 11:17:02 -- target/shutdown.sh@28 -- # cat 00:22:42.213 11:17:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:42.213 11:17:02 -- target/shutdown.sh@28 -- # cat 00:22:42.213 11:17:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:42.213 11:17:02 -- target/shutdown.sh@28 -- # cat 00:22:42.213 11:17:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:42.213 11:17:02 -- target/shutdown.sh@28 -- # cat 00:22:42.213 11:17:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:42.213 11:17:02 -- target/shutdown.sh@28 -- # cat 00:22:42.213 11:17:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:42.213 11:17:02 -- target/shutdown.sh@28 -- # cat 00:22:42.213 11:17:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:42.213 11:17:02 -- target/shutdown.sh@28 -- # cat 00:22:42.213 11:17:02 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:42.213 11:17:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.213 11:17:02 -- common/autotest_common.sh@10 -- # set +x 00:22:42.213 Malloc1 00:22:42.213 [2024-12-13 11:17:02.772993] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:42.470 Malloc2 00:22:42.470 Malloc3 00:22:42.470 Malloc4 00:22:42.470 Malloc5 00:22:42.470 Malloc6 00:22:42.470 Malloc7 00:22:42.728 Malloc8 00:22:42.728 Malloc9 00:22:42.728 Malloc10 00:22:42.728 11:17:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.728 11:17:03 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:42.728 11:17:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:42.728 11:17:03 -- common/autotest_common.sh@10 -- # set +x 00:22:42.728 11:17:03 -- target/shutdown.sh@78 -- # perfpid=1700637 00:22:42.728 11:17:03 -- target/shutdown.sh@79 -- # waitforlisten 1700637 /var/tmp/bdevperf.sock 00:22:42.728 11:17:03 -- common/autotest_common.sh@829 -- # '[' -z 1700637 ']' 00:22:42.728 11:17:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:42.728 11:17:03 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:42.728 11:17:03 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:42.728 11:17:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:42.728 11:17:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:42.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:42.728 11:17:03 -- nvmf/common.sh@520 -- # config=() 00:22:42.728 11:17:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:42.728 11:17:03 -- nvmf/common.sh@520 -- # local subsystem config 00:22:42.728 11:17:03 -- common/autotest_common.sh@10 -- # set +x 00:22:42.728 11:17:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:42.728 11:17:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:42.728 { 00:22:42.728 "params": { 00:22:42.728 "name": "Nvme$subsystem", 00:22:42.728 "trtype": "$TEST_TRANSPORT", 00:22:42.728 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.728 "adrfam": "ipv4", 00:22:42.728 "trsvcid": "$NVMF_PORT", 00:22:42.728 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.728 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.729 "hdgst": ${hdgst:-false}, 00:22:42.729 "ddgst": ${ddgst:-false} 00:22:42.729 }, 00:22:42.729 "method": "bdev_nvme_attach_controller" 00:22:42.729 } 00:22:42.729 EOF 00:22:42.729 )") 00:22:42.729 11:17:03 -- nvmf/common.sh@542 -- # cat 00:22:42.729 11:17:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:42.729 11:17:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:42.729 { 00:22:42.729 "params": { 00:22:42.729 "name": "Nvme$subsystem", 00:22:42.729 "trtype": "$TEST_TRANSPORT", 00:22:42.729 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.729 "adrfam": "ipv4", 00:22:42.729 "trsvcid": "$NVMF_PORT", 00:22:42.729 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.729 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.729 "hdgst": ${hdgst:-false}, 00:22:42.729 "ddgst": ${ddgst:-false} 00:22:42.729 }, 00:22:42.729 "method": "bdev_nvme_attach_controller" 00:22:42.729 } 00:22:42.729 EOF 00:22:42.729 )") 00:22:42.729 11:17:03 -- nvmf/common.sh@542 -- # cat 00:22:42.729 11:17:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:42.729 11:17:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:42.729 { 00:22:42.729 "params": { 00:22:42.729 "name": "Nvme$subsystem", 00:22:42.729 "trtype": "$TEST_TRANSPORT", 00:22:42.729 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.729 "adrfam": "ipv4", 00:22:42.729 "trsvcid": "$NVMF_PORT", 00:22:42.729 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.729 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.729 "hdgst": ${hdgst:-false}, 00:22:42.729 "ddgst": ${ddgst:-false} 00:22:42.729 }, 00:22:42.729 "method": "bdev_nvme_attach_controller" 00:22:42.729 } 00:22:42.729 EOF 00:22:42.729 )") 00:22:42.729 11:17:03 -- nvmf/common.sh@542 -- # cat 00:22:42.729 11:17:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:42.729 11:17:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:42.729 { 00:22:42.729 "params": { 00:22:42.729 "name": "Nvme$subsystem", 00:22:42.729 "trtype": "$TEST_TRANSPORT", 00:22:42.729 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.729 "adrfam": "ipv4", 00:22:42.729 "trsvcid": "$NVMF_PORT", 00:22:42.729 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.729 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.729 "hdgst": ${hdgst:-false}, 00:22:42.729 "ddgst": ${ddgst:-false} 00:22:42.729 }, 00:22:42.729 "method": "bdev_nvme_attach_controller" 00:22:42.729 } 00:22:42.729 EOF 00:22:42.729 )") 00:22:42.729 11:17:03 -- nvmf/common.sh@542 -- # cat 00:22:42.729 11:17:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 
00:22:42.729 11:17:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:42.729 { 00:22:42.729 "params": { 00:22:42.729 "name": "Nvme$subsystem", 00:22:42.729 "trtype": "$TEST_TRANSPORT", 00:22:42.729 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.729 "adrfam": "ipv4", 00:22:42.729 "trsvcid": "$NVMF_PORT", 00:22:42.729 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.729 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.729 "hdgst": ${hdgst:-false}, 00:22:42.729 "ddgst": ${ddgst:-false} 00:22:42.729 }, 00:22:42.729 "method": "bdev_nvme_attach_controller" 00:22:42.729 } 00:22:42.729 EOF 00:22:42.729 )") 00:22:42.729 11:17:03 -- nvmf/common.sh@542 -- # cat 00:22:42.729 11:17:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:42.729 11:17:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:42.729 { 00:22:42.729 "params": { 00:22:42.729 "name": "Nvme$subsystem", 00:22:42.729 "trtype": "$TEST_TRANSPORT", 00:22:42.729 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.729 "adrfam": "ipv4", 00:22:42.729 "trsvcid": "$NVMF_PORT", 00:22:42.729 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.729 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.729 "hdgst": ${hdgst:-false}, 00:22:42.729 "ddgst": ${ddgst:-false} 00:22:42.729 }, 00:22:42.729 "method": "bdev_nvme_attach_controller" 00:22:42.729 } 00:22:42.729 EOF 00:22:42.729 )") 00:22:42.729 11:17:03 -- nvmf/common.sh@542 -- # cat 00:22:42.729 [2024-12-13 11:17:03.240511] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:42.729 [2024-12-13 11:17:03.240557] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:42.729 11:17:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:42.729 11:17:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:42.729 { 00:22:42.729 "params": { 00:22:42.729 "name": "Nvme$subsystem", 00:22:42.729 "trtype": "$TEST_TRANSPORT", 00:22:42.729 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.729 "adrfam": "ipv4", 00:22:42.729 "trsvcid": "$NVMF_PORT", 00:22:42.729 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.729 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.729 "hdgst": ${hdgst:-false}, 00:22:42.729 "ddgst": ${ddgst:-false} 00:22:42.729 }, 00:22:42.729 "method": "bdev_nvme_attach_controller" 00:22:42.729 } 00:22:42.729 EOF 00:22:42.729 )") 00:22:42.729 11:17:03 -- nvmf/common.sh@542 -- # cat 00:22:42.729 11:17:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:42.729 11:17:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:42.729 { 00:22:42.729 "params": { 00:22:42.729 "name": "Nvme$subsystem", 00:22:42.729 "trtype": "$TEST_TRANSPORT", 00:22:42.729 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.729 "adrfam": "ipv4", 00:22:42.729 "trsvcid": "$NVMF_PORT", 00:22:42.729 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.729 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.729 "hdgst": ${hdgst:-false}, 00:22:42.729 "ddgst": ${ddgst:-false} 00:22:42.729 }, 00:22:42.729 "method": "bdev_nvme_attach_controller" 00:22:42.729 } 00:22:42.729 EOF 00:22:42.729 )") 00:22:42.729 11:17:03 -- nvmf/common.sh@542 -- # cat 00:22:42.729 11:17:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:42.729 11:17:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:42.729 { 00:22:42.729 "params": { 00:22:42.729 "name": 
"Nvme$subsystem", 00:22:42.729 "trtype": "$TEST_TRANSPORT", 00:22:42.729 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.729 "adrfam": "ipv4", 00:22:42.729 "trsvcid": "$NVMF_PORT", 00:22:42.729 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.729 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.729 "hdgst": ${hdgst:-false}, 00:22:42.729 "ddgst": ${ddgst:-false} 00:22:42.729 }, 00:22:42.729 "method": "bdev_nvme_attach_controller" 00:22:42.729 } 00:22:42.729 EOF 00:22:42.729 )") 00:22:42.729 11:17:03 -- nvmf/common.sh@542 -- # cat 00:22:42.729 11:17:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:42.729 11:17:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:42.729 { 00:22:42.729 "params": { 00:22:42.729 "name": "Nvme$subsystem", 00:22:42.729 "trtype": "$TEST_TRANSPORT", 00:22:42.729 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.729 "adrfam": "ipv4", 00:22:42.729 "trsvcid": "$NVMF_PORT", 00:22:42.729 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.729 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.729 "hdgst": ${hdgst:-false}, 00:22:42.729 "ddgst": ${ddgst:-false} 00:22:42.729 }, 00:22:42.729 "method": "bdev_nvme_attach_controller" 00:22:42.729 } 00:22:42.729 EOF 00:22:42.729 )") 00:22:42.729 11:17:03 -- nvmf/common.sh@542 -- # cat 00:22:42.729 EAL: No free 2048 kB hugepages reported on node 1 00:22:42.729 11:17:03 -- nvmf/common.sh@544 -- # jq . 00:22:42.729 11:17:03 -- nvmf/common.sh@545 -- # IFS=, 00:22:42.729 11:17:03 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:22:42.729 "params": { 00:22:42.729 "name": "Nvme1", 00:22:42.729 "trtype": "rdma", 00:22:42.729 "traddr": "192.168.100.8", 00:22:42.729 "adrfam": "ipv4", 00:22:42.729 "trsvcid": "4420", 00:22:42.729 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:42.729 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:42.729 "hdgst": false, 00:22:42.729 "ddgst": false 00:22:42.729 }, 00:22:42.729 "method": "bdev_nvme_attach_controller" 00:22:42.729 },{ 00:22:42.729 "params": { 00:22:42.729 "name": "Nvme2", 00:22:42.729 "trtype": "rdma", 00:22:42.729 "traddr": "192.168.100.8", 00:22:42.729 "adrfam": "ipv4", 00:22:42.729 "trsvcid": "4420", 00:22:42.729 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:42.729 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:42.729 "hdgst": false, 00:22:42.729 "ddgst": false 00:22:42.729 }, 00:22:42.729 "method": "bdev_nvme_attach_controller" 00:22:42.729 },{ 00:22:42.729 "params": { 00:22:42.729 "name": "Nvme3", 00:22:42.729 "trtype": "rdma", 00:22:42.729 "traddr": "192.168.100.8", 00:22:42.729 "adrfam": "ipv4", 00:22:42.729 "trsvcid": "4420", 00:22:42.729 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:42.729 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:42.729 "hdgst": false, 00:22:42.729 "ddgst": false 00:22:42.729 }, 00:22:42.729 "method": "bdev_nvme_attach_controller" 00:22:42.729 },{ 00:22:42.729 "params": { 00:22:42.729 "name": "Nvme4", 00:22:42.729 "trtype": "rdma", 00:22:42.729 "traddr": "192.168.100.8", 00:22:42.729 "adrfam": "ipv4", 00:22:42.729 "trsvcid": "4420", 00:22:42.730 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:42.730 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:42.730 "hdgst": false, 00:22:42.730 "ddgst": false 00:22:42.730 }, 00:22:42.730 "method": "bdev_nvme_attach_controller" 00:22:42.730 },{ 00:22:42.730 "params": { 00:22:42.730 "name": "Nvme5", 00:22:42.730 "trtype": "rdma", 00:22:42.730 "traddr": "192.168.100.8", 00:22:42.730 "adrfam": "ipv4", 00:22:42.730 "trsvcid": "4420", 00:22:42.730 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:42.730 
"hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:42.730 "hdgst": false, 00:22:42.730 "ddgst": false 00:22:42.730 }, 00:22:42.730 "method": "bdev_nvme_attach_controller" 00:22:42.730 },{ 00:22:42.730 "params": { 00:22:42.730 "name": "Nvme6", 00:22:42.730 "trtype": "rdma", 00:22:42.730 "traddr": "192.168.100.8", 00:22:42.730 "adrfam": "ipv4", 00:22:42.730 "trsvcid": "4420", 00:22:42.730 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:42.730 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:42.730 "hdgst": false, 00:22:42.730 "ddgst": false 00:22:42.730 }, 00:22:42.730 "method": "bdev_nvme_attach_controller" 00:22:42.730 },{ 00:22:42.730 "params": { 00:22:42.730 "name": "Nvme7", 00:22:42.730 "trtype": "rdma", 00:22:42.730 "traddr": "192.168.100.8", 00:22:42.730 "adrfam": "ipv4", 00:22:42.730 "trsvcid": "4420", 00:22:42.730 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:42.730 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:42.730 "hdgst": false, 00:22:42.730 "ddgst": false 00:22:42.730 }, 00:22:42.730 "method": "bdev_nvme_attach_controller" 00:22:42.730 },{ 00:22:42.730 "params": { 00:22:42.730 "name": "Nvme8", 00:22:42.730 "trtype": "rdma", 00:22:42.730 "traddr": "192.168.100.8", 00:22:42.730 "adrfam": "ipv4", 00:22:42.730 "trsvcid": "4420", 00:22:42.730 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:42.730 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:42.730 "hdgst": false, 00:22:42.730 "ddgst": false 00:22:42.730 }, 00:22:42.730 "method": "bdev_nvme_attach_controller" 00:22:42.730 },{ 00:22:42.730 "params": { 00:22:42.730 "name": "Nvme9", 00:22:42.730 "trtype": "rdma", 00:22:42.730 "traddr": "192.168.100.8", 00:22:42.730 "adrfam": "ipv4", 00:22:42.730 "trsvcid": "4420", 00:22:42.730 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:42.730 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:42.730 "hdgst": false, 00:22:42.730 "ddgst": false 00:22:42.730 }, 00:22:42.730 "method": "bdev_nvme_attach_controller" 00:22:42.730 },{ 00:22:42.730 "params": { 00:22:42.730 "name": "Nvme10", 00:22:42.730 "trtype": "rdma", 00:22:42.730 "traddr": "192.168.100.8", 00:22:42.730 "adrfam": "ipv4", 00:22:42.730 "trsvcid": "4420", 00:22:42.730 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:42.730 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:42.730 "hdgst": false, 00:22:42.730 "ddgst": false 00:22:42.730 }, 00:22:42.730 "method": "bdev_nvme_attach_controller" 00:22:42.730 }' 00:22:42.730 [2024-12-13 11:17:03.295428] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.987 [2024-12-13 11:17:03.360858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.354 11:17:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:44.354 11:17:04 -- common/autotest_common.sh@862 -- # return 0 00:22:44.354 11:17:04 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:44.354 11:17:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.354 11:17:04 -- common/autotest_common.sh@10 -- # set +x 00:22:44.354 11:17:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.354 11:17:04 -- target/shutdown.sh@83 -- # kill -9 1700637 00:22:44.354 11:17:04 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:22:44.354 11:17:04 -- target/shutdown.sh@87 -- # sleep 1 00:22:45.283 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1700637 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:45.283 11:17:05 -- target/shutdown.sh@88 -- # kill -0 
1700303 00:22:45.283 11:17:05 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:45.283 11:17:05 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:45.283 11:17:05 -- nvmf/common.sh@520 -- # config=() 00:22:45.283 11:17:05 -- nvmf/common.sh@520 -- # local subsystem config 00:22:45.283 11:17:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:45.283 11:17:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:45.283 { 00:22:45.283 "params": { 00:22:45.283 "name": "Nvme$subsystem", 00:22:45.283 "trtype": "$TEST_TRANSPORT", 00:22:45.283 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.283 "adrfam": "ipv4", 00:22:45.283 "trsvcid": "$NVMF_PORT", 00:22:45.283 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.283 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.283 "hdgst": ${hdgst:-false}, 00:22:45.283 "ddgst": ${ddgst:-false} 00:22:45.283 }, 00:22:45.283 "method": "bdev_nvme_attach_controller" 00:22:45.283 } 00:22:45.283 EOF 00:22:45.283 )") 00:22:45.283 11:17:05 -- nvmf/common.sh@542 -- # cat 00:22:45.283 11:17:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:45.283 11:17:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:45.284 { 00:22:45.284 "params": { 00:22:45.284 "name": "Nvme$subsystem", 00:22:45.284 "trtype": "$TEST_TRANSPORT", 00:22:45.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.284 "adrfam": "ipv4", 00:22:45.284 "trsvcid": "$NVMF_PORT", 00:22:45.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.284 "hdgst": ${hdgst:-false}, 00:22:45.284 "ddgst": ${ddgst:-false} 00:22:45.284 }, 00:22:45.284 "method": "bdev_nvme_attach_controller" 00:22:45.284 } 00:22:45.284 EOF 00:22:45.284 )") 00:22:45.284 11:17:05 -- nvmf/common.sh@542 -- # cat 00:22:45.284 11:17:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:45.284 11:17:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:45.284 { 00:22:45.284 "params": { 00:22:45.284 "name": "Nvme$subsystem", 00:22:45.284 "trtype": "$TEST_TRANSPORT", 00:22:45.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.284 "adrfam": "ipv4", 00:22:45.284 "trsvcid": "$NVMF_PORT", 00:22:45.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.284 "hdgst": ${hdgst:-false}, 00:22:45.284 "ddgst": ${ddgst:-false} 00:22:45.284 }, 00:22:45.284 "method": "bdev_nvme_attach_controller" 00:22:45.284 } 00:22:45.284 EOF 00:22:45.284 )") 00:22:45.284 11:17:05 -- nvmf/common.sh@542 -- # cat 00:22:45.284 11:17:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:45.284 11:17:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:45.284 { 00:22:45.284 "params": { 00:22:45.284 "name": "Nvme$subsystem", 00:22:45.284 "trtype": "$TEST_TRANSPORT", 00:22:45.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.284 "adrfam": "ipv4", 00:22:45.284 "trsvcid": "$NVMF_PORT", 00:22:45.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.284 "hdgst": ${hdgst:-false}, 00:22:45.284 "ddgst": ${ddgst:-false} 00:22:45.284 }, 00:22:45.284 "method": "bdev_nvme_attach_controller" 00:22:45.284 } 00:22:45.284 EOF 00:22:45.284 )") 00:22:45.284 11:17:05 -- nvmf/common.sh@542 -- # cat 00:22:45.284 11:17:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:45.284 
11:17:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:45.284 { 00:22:45.284 "params": { 00:22:45.284 "name": "Nvme$subsystem", 00:22:45.284 "trtype": "$TEST_TRANSPORT", 00:22:45.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.284 "adrfam": "ipv4", 00:22:45.284 "trsvcid": "$NVMF_PORT", 00:22:45.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.284 "hdgst": ${hdgst:-false}, 00:22:45.284 "ddgst": ${ddgst:-false} 00:22:45.284 }, 00:22:45.284 "method": "bdev_nvme_attach_controller" 00:22:45.284 } 00:22:45.284 EOF 00:22:45.284 )") 00:22:45.284 11:17:05 -- nvmf/common.sh@542 -- # cat 00:22:45.284 11:17:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:45.284 11:17:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:45.284 { 00:22:45.284 "params": { 00:22:45.284 "name": "Nvme$subsystem", 00:22:45.284 "trtype": "$TEST_TRANSPORT", 00:22:45.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.284 "adrfam": "ipv4", 00:22:45.284 "trsvcid": "$NVMF_PORT", 00:22:45.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.284 "hdgst": ${hdgst:-false}, 00:22:45.284 "ddgst": ${ddgst:-false} 00:22:45.284 }, 00:22:45.284 "method": "bdev_nvme_attach_controller" 00:22:45.284 } 00:22:45.284 EOF 00:22:45.284 )") 00:22:45.284 11:17:05 -- nvmf/common.sh@542 -- # cat 00:22:45.284 [2024-12-13 11:17:05.758581] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:45.284 [2024-12-13 11:17:05.758626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1701173 ] 00:22:45.284 11:17:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:45.284 11:17:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:45.284 { 00:22:45.284 "params": { 00:22:45.284 "name": "Nvme$subsystem", 00:22:45.284 "trtype": "$TEST_TRANSPORT", 00:22:45.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.284 "adrfam": "ipv4", 00:22:45.284 "trsvcid": "$NVMF_PORT", 00:22:45.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.284 "hdgst": ${hdgst:-false}, 00:22:45.284 "ddgst": ${ddgst:-false} 00:22:45.284 }, 00:22:45.284 "method": "bdev_nvme_attach_controller" 00:22:45.284 } 00:22:45.284 EOF 00:22:45.284 )") 00:22:45.284 11:17:05 -- nvmf/common.sh@542 -- # cat 00:22:45.284 11:17:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:45.284 11:17:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:45.284 { 00:22:45.284 "params": { 00:22:45.284 "name": "Nvme$subsystem", 00:22:45.284 "trtype": "$TEST_TRANSPORT", 00:22:45.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.284 "adrfam": "ipv4", 00:22:45.284 "trsvcid": "$NVMF_PORT", 00:22:45.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.284 "hdgst": ${hdgst:-false}, 00:22:45.284 "ddgst": ${ddgst:-false} 00:22:45.284 }, 00:22:45.284 "method": "bdev_nvme_attach_controller" 00:22:45.284 } 00:22:45.284 EOF 00:22:45.284 )") 00:22:45.284 11:17:05 -- nvmf/common.sh@542 -- # cat 00:22:45.284 11:17:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:45.284 11:17:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:45.284 { 00:22:45.284 "params": { 00:22:45.284 "name": 
"Nvme$subsystem", 00:22:45.284 "trtype": "$TEST_TRANSPORT", 00:22:45.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.284 "adrfam": "ipv4", 00:22:45.284 "trsvcid": "$NVMF_PORT", 00:22:45.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.284 "hdgst": ${hdgst:-false}, 00:22:45.284 "ddgst": ${ddgst:-false} 00:22:45.284 }, 00:22:45.284 "method": "bdev_nvme_attach_controller" 00:22:45.284 } 00:22:45.284 EOF 00:22:45.284 )") 00:22:45.284 11:17:05 -- nvmf/common.sh@542 -- # cat 00:22:45.284 11:17:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:45.284 11:17:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:45.284 { 00:22:45.284 "params": { 00:22:45.284 "name": "Nvme$subsystem", 00:22:45.284 "trtype": "$TEST_TRANSPORT", 00:22:45.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.284 "adrfam": "ipv4", 00:22:45.284 "trsvcid": "$NVMF_PORT", 00:22:45.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.284 "hdgst": ${hdgst:-false}, 00:22:45.284 "ddgst": ${ddgst:-false} 00:22:45.284 }, 00:22:45.284 "method": "bdev_nvme_attach_controller" 00:22:45.284 } 00:22:45.284 EOF 00:22:45.284 )") 00:22:45.284 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.284 11:17:05 -- nvmf/common.sh@542 -- # cat 00:22:45.284 11:17:05 -- nvmf/common.sh@544 -- # jq . 00:22:45.284 11:17:05 -- nvmf/common.sh@545 -- # IFS=, 00:22:45.284 11:17:05 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:22:45.284 "params": { 00:22:45.284 "name": "Nvme1", 00:22:45.284 "trtype": "rdma", 00:22:45.284 "traddr": "192.168.100.8", 00:22:45.284 "adrfam": "ipv4", 00:22:45.284 "trsvcid": "4420", 00:22:45.284 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.284 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:45.284 "hdgst": false, 00:22:45.284 "ddgst": false 00:22:45.284 }, 00:22:45.284 "method": "bdev_nvme_attach_controller" 00:22:45.284 },{ 00:22:45.284 "params": { 00:22:45.284 "name": "Nvme2", 00:22:45.284 "trtype": "rdma", 00:22:45.284 "traddr": "192.168.100.8", 00:22:45.284 "adrfam": "ipv4", 00:22:45.284 "trsvcid": "4420", 00:22:45.284 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:45.284 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:45.284 "hdgst": false, 00:22:45.284 "ddgst": false 00:22:45.284 }, 00:22:45.284 "method": "bdev_nvme_attach_controller" 00:22:45.284 },{ 00:22:45.284 "params": { 00:22:45.284 "name": "Nvme3", 00:22:45.284 "trtype": "rdma", 00:22:45.284 "traddr": "192.168.100.8", 00:22:45.284 "adrfam": "ipv4", 00:22:45.284 "trsvcid": "4420", 00:22:45.284 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:45.284 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:45.284 "hdgst": false, 00:22:45.284 "ddgst": false 00:22:45.284 }, 00:22:45.284 "method": "bdev_nvme_attach_controller" 00:22:45.284 },{ 00:22:45.284 "params": { 00:22:45.284 "name": "Nvme4", 00:22:45.284 "trtype": "rdma", 00:22:45.284 "traddr": "192.168.100.8", 00:22:45.284 "adrfam": "ipv4", 00:22:45.284 "trsvcid": "4420", 00:22:45.284 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:45.284 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:45.284 "hdgst": false, 00:22:45.284 "ddgst": false 00:22:45.284 }, 00:22:45.284 "method": "bdev_nvme_attach_controller" 00:22:45.284 },{ 00:22:45.284 "params": { 00:22:45.284 "name": "Nvme5", 00:22:45.284 "trtype": "rdma", 00:22:45.284 "traddr": "192.168.100.8", 00:22:45.284 "adrfam": "ipv4", 00:22:45.284 "trsvcid": "4420", 00:22:45.284 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:45.284 
"hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:45.284 "hdgst": false, 00:22:45.284 "ddgst": false 00:22:45.284 }, 00:22:45.284 "method": "bdev_nvme_attach_controller" 00:22:45.284 },{ 00:22:45.284 "params": { 00:22:45.284 "name": "Nvme6", 00:22:45.284 "trtype": "rdma", 00:22:45.284 "traddr": "192.168.100.8", 00:22:45.284 "adrfam": "ipv4", 00:22:45.285 "trsvcid": "4420", 00:22:45.285 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:45.285 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:45.285 "hdgst": false, 00:22:45.285 "ddgst": false 00:22:45.285 }, 00:22:45.285 "method": "bdev_nvme_attach_controller" 00:22:45.285 },{ 00:22:45.285 "params": { 00:22:45.285 "name": "Nvme7", 00:22:45.285 "trtype": "rdma", 00:22:45.285 "traddr": "192.168.100.8", 00:22:45.285 "adrfam": "ipv4", 00:22:45.285 "trsvcid": "4420", 00:22:45.285 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:45.285 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:45.285 "hdgst": false, 00:22:45.285 "ddgst": false 00:22:45.285 }, 00:22:45.285 "method": "bdev_nvme_attach_controller" 00:22:45.285 },{ 00:22:45.285 "params": { 00:22:45.285 "name": "Nvme8", 00:22:45.285 "trtype": "rdma", 00:22:45.285 "traddr": "192.168.100.8", 00:22:45.285 "adrfam": "ipv4", 00:22:45.285 "trsvcid": "4420", 00:22:45.285 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:45.285 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:45.285 "hdgst": false, 00:22:45.285 "ddgst": false 00:22:45.285 }, 00:22:45.285 "method": "bdev_nvme_attach_controller" 00:22:45.285 },{ 00:22:45.285 "params": { 00:22:45.285 "name": "Nvme9", 00:22:45.285 "trtype": "rdma", 00:22:45.285 "traddr": "192.168.100.8", 00:22:45.285 "adrfam": "ipv4", 00:22:45.285 "trsvcid": "4420", 00:22:45.285 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:45.285 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:45.285 "hdgst": false, 00:22:45.285 "ddgst": false 00:22:45.285 }, 00:22:45.285 "method": "bdev_nvme_attach_controller" 00:22:45.285 },{ 00:22:45.285 "params": { 00:22:45.285 "name": "Nvme10", 00:22:45.285 "trtype": "rdma", 00:22:45.285 "traddr": "192.168.100.8", 00:22:45.285 "adrfam": "ipv4", 00:22:45.285 "trsvcid": "4420", 00:22:45.285 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:45.285 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:45.285 "hdgst": false, 00:22:45.285 "ddgst": false 00:22:45.285 }, 00:22:45.285 "method": "bdev_nvme_attach_controller" 00:22:45.285 }' 00:22:45.285 [2024-12-13 11:17:05.812819] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.542 [2024-12-13 11:17:05.880077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.473 Running I/O for 1 seconds... 
00:22:47.404 00:22:47.404 Latency(us) 00:22:47.404 [2024-12-13T10:17:07.973Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.404 [2024-12-13T10:17:07.973Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:47.404 Verification LBA range: start 0x0 length 0x400 00:22:47.404 Nvme1n1 : 1.11 758.76 47.42 0.00 0.00 83374.81 7864.32 116508.44 00:22:47.404 [2024-12-13T10:17:07.973Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:47.404 Verification LBA range: start 0x0 length 0x400 00:22:47.404 Nvme2n1 : 1.12 774.25 48.39 0.00 0.00 81191.23 7912.87 72235.24 00:22:47.404 [2024-12-13T10:17:07.973Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:47.404 Verification LBA range: start 0x0 length 0x400 00:22:47.404 Nvme3n1 : 1.12 773.61 48.35 0.00 0.00 80759.66 8009.96 70681.79 00:22:47.404 [2024-12-13T10:17:07.973Z] Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:47.404 Verification LBA range: start 0x0 length 0x400 00:22:47.404 Nvme4n1 : 1.12 772.97 48.31 0.00 0.00 80377.71 8058.50 69516.71 00:22:47.404 [2024-12-13T10:17:07.973Z] Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:47.404 Verification LBA range: start 0x0 length 0x400 00:22:47.404 Nvme5n1 : 1.12 772.33 48.27 0.00 0.00 79996.59 8107.05 67963.26 00:22:47.404 [2024-12-13T10:17:07.973Z] Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:47.404 Verification LBA range: start 0x0 length 0x400 00:22:47.404 Nvme6n1 : 1.12 771.70 48.23 0.00 0.00 79619.81 8155.59 67963.26 00:22:47.404 [2024-12-13T10:17:07.973Z] Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:47.404 Verification LBA range: start 0x0 length 0x400 00:22:47.404 Nvme7n1 : 1.12 771.06 48.19 0.00 0.00 79241.28 8252.68 69516.71 00:22:47.404 [2024-12-13T10:17:07.973Z] Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:47.404 Verification LBA range: start 0x0 length 0x400 00:22:47.404 Nvme8n1 : 1.12 770.44 48.15 0.00 0.00 78868.31 8301.23 71070.15 00:22:47.404 [2024-12-13T10:17:07.973Z] Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:47.404 Verification LBA range: start 0x0 length 0x400 00:22:47.404 Nvme9n1 : 1.12 769.82 48.11 0.00 0.00 78486.86 8349.77 72235.24 00:22:47.404 [2024-12-13T10:17:07.973Z] Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:47.404 Verification LBA range: start 0x0 length 0x400 00:22:47.404 Nvme10n1 : 1.12 583.97 36.50 0.00 0.00 102924.01 8204.14 321563.31 00:22:47.404 [2024-12-13T10:17:07.974Z] =================================================================================================================== 00:22:47.405 [2024-12-13T10:17:07.974Z] Total : 7518.92 469.93 0.00 0.00 81976.83 7864.32 321563.31 00:22:47.662 11:17:08 -- target/shutdown.sh@93 -- # stoptarget 00:22:47.662 11:17:08 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:47.662 11:17:08 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:47.662 11:17:08 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:47.662 11:17:08 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:47.662 11:17:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:47.662 11:17:08 -- nvmf/common.sh@116 -- # sync 00:22:47.662 11:17:08 -- 
nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:22:47.662 11:17:08 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:22:47.662 11:17:08 -- nvmf/common.sh@119 -- # set +e 00:22:47.662 11:17:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:47.662 11:17:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:22:47.662 rmmod nvme_rdma 00:22:47.662 rmmod nvme_fabrics 00:22:47.662 11:17:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:47.662 11:17:08 -- nvmf/common.sh@123 -- # set -e 00:22:47.662 11:17:08 -- nvmf/common.sh@124 -- # return 0 00:22:47.662 11:17:08 -- nvmf/common.sh@477 -- # '[' -n 1700303 ']' 00:22:47.662 11:17:08 -- nvmf/common.sh@478 -- # killprocess 1700303 00:22:47.662 11:17:08 -- common/autotest_common.sh@936 -- # '[' -z 1700303 ']' 00:22:47.662 11:17:08 -- common/autotest_common.sh@940 -- # kill -0 1700303 00:22:47.662 11:17:08 -- common/autotest_common.sh@941 -- # uname 00:22:47.662 11:17:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:47.919 11:17:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1700303 00:22:47.919 11:17:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:47.919 11:17:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:47.919 11:17:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1700303' 00:22:47.919 killing process with pid 1700303 00:22:47.919 11:17:08 -- common/autotest_common.sh@955 -- # kill 1700303 00:22:47.919 11:17:08 -- common/autotest_common.sh@960 -- # wait 1700303 00:22:48.177 11:17:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:48.177 11:17:08 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:22:48.177 00:22:48.177 real 0m12.890s 00:22:48.177 user 0m32.638s 00:22:48.177 sys 0m5.351s 00:22:48.177 11:17:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:48.177 11:17:08 -- common/autotest_common.sh@10 -- # set +x 00:22:48.177 ************************************ 00:22:48.177 END TEST nvmf_shutdown_tc1 00:22:48.177 ************************************ 00:22:48.435 11:17:08 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:48.435 11:17:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:48.435 11:17:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:48.435 11:17:08 -- common/autotest_common.sh@10 -- # set +x 00:22:48.435 ************************************ 00:22:48.435 START TEST nvmf_shutdown_tc2 00:22:48.435 ************************************ 00:22:48.435 11:17:08 -- common/autotest_common.sh@1114 -- # nvmf_shutdown_tc2 00:22:48.435 11:17:08 -- target/shutdown.sh@98 -- # starttarget 00:22:48.435 11:17:08 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:48.435 11:17:08 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:22:48.435 11:17:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:48.435 11:17:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:48.435 11:17:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:48.435 11:17:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:48.435 11:17:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.435 11:17:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:48.435 11:17:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.435 11:17:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:48.435 11:17:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:48.435 11:17:08 -- nvmf/common.sh@284 -- # xtrace_disable 
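The kill -0 / ps --no-headers -o comm= / kill / wait sequence traced just above (autotest_common.sh killprocess) is what tears down the tc1 nvmf target, pid 1700303, before tc2 starts. A minimal sketch of the logic visible in the trace follows; it is an illustration only, and what the real helper does for sudo-wrapped processes or pid files is not shown here.

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1   # nothing to do if the target already exited
    if [ "$(uname)" = Linux ]; then
        # The trace checks the command name first; returning here for a sudo
        # wrapper is an assumption, since that branch is not exercised above.
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap the child and ignore its exit status
}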
00:22:48.435 11:17:08 -- common/autotest_common.sh@10 -- # set +x 00:22:48.435 11:17:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:48.435 11:17:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:48.435 11:17:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:48.435 11:17:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:48.435 11:17:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:48.435 11:17:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:48.435 11:17:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:48.435 11:17:08 -- nvmf/common.sh@294 -- # net_devs=() 00:22:48.435 11:17:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:48.435 11:17:08 -- nvmf/common.sh@295 -- # e810=() 00:22:48.435 11:17:08 -- nvmf/common.sh@295 -- # local -ga e810 00:22:48.435 11:17:08 -- nvmf/common.sh@296 -- # x722=() 00:22:48.435 11:17:08 -- nvmf/common.sh@296 -- # local -ga x722 00:22:48.435 11:17:08 -- nvmf/common.sh@297 -- # mlx=() 00:22:48.435 11:17:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:48.435 11:17:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:48.435 11:17:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:48.435 11:17:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:48.435 11:17:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:48.435 11:17:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:48.435 11:17:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:48.435 11:17:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:48.435 11:17:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:48.435 11:17:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:48.435 11:17:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:48.435 11:17:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:48.435 11:17:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:48.435 11:17:08 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:22:48.435 11:17:08 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:22:48.435 11:17:08 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:22:48.435 11:17:08 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:22:48.435 11:17:08 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:22:48.435 11:17:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:48.435 11:17:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:48.435 11:17:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:22:48.435 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:22:48.435 11:17:08 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:22:48.435 11:17:08 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:22:48.435 11:17:08 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:48.435 11:17:08 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:48.435 11:17:08 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:22:48.435 11:17:08 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:22:48.435 11:17:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:48.435 11:17:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:22:48.435 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:22:48.435 11:17:08 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:22:48.435 11:17:08 -- 
nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:22:48.435 11:17:08 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:48.435 11:17:08 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:48.435 11:17:08 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:22:48.435 11:17:08 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:22:48.435 11:17:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:48.435 11:17:08 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:22:48.435 11:17:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:48.435 11:17:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.435 11:17:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:48.435 11:17:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.435 11:17:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:22:48.435 Found net devices under 0000:18:00.0: mlx_0_0 00:22:48.435 11:17:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.435 11:17:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:48.436 11:17:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.436 11:17:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:48.436 11:17:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.436 11:17:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:22:48.436 Found net devices under 0000:18:00.1: mlx_0_1 00:22:48.436 11:17:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.436 11:17:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:48.436 11:17:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:48.436 11:17:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:48.436 11:17:08 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:22:48.436 11:17:08 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:22:48.436 11:17:08 -- nvmf/common.sh@408 -- # rdma_device_init 00:22:48.436 11:17:08 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:22:48.436 11:17:08 -- nvmf/common.sh@57 -- # uname 00:22:48.436 11:17:08 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:22:48.436 11:17:08 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:22:48.436 11:17:08 -- nvmf/common.sh@62 -- # modprobe ib_core 00:22:48.436 11:17:08 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:22:48.436 11:17:08 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:22:48.436 11:17:08 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:22:48.436 11:17:08 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:22:48.436 11:17:08 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:22:48.436 11:17:08 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:22:48.436 11:17:08 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:48.436 11:17:08 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:22:48.436 11:17:08 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:48.436 11:17:08 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:22:48.436 11:17:08 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:22:48.436 11:17:08 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:48.436 11:17:08 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:22:48.436 11:17:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:48.436 11:17:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:48.436 11:17:08 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:48.436 
11:17:08 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:22:48.436 11:17:08 -- nvmf/common.sh@104 -- # continue 2 00:22:48.436 11:17:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:48.436 11:17:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:48.436 11:17:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:48.436 11:17:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:48.436 11:17:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:48.436 11:17:08 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:22:48.436 11:17:08 -- nvmf/common.sh@104 -- # continue 2 00:22:48.436 11:17:08 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:22:48.436 11:17:08 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:22:48.436 11:17:08 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:22:48.436 11:17:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:22:48.436 11:17:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:48.436 11:17:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:48.436 11:17:08 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:22:48.436 11:17:08 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:22:48.436 11:17:08 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:22:48.436 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:48.436 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:22:48.436 altname enp24s0f0np0 00:22:48.436 altname ens785f0np0 00:22:48.436 inet 192.168.100.8/24 scope global mlx_0_0 00:22:48.436 valid_lft forever preferred_lft forever 00:22:48.436 11:17:08 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:22:48.436 11:17:08 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:22:48.436 11:17:08 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:22:48.436 11:17:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:22:48.436 11:17:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:48.436 11:17:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:48.436 11:17:08 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:22:48.436 11:17:08 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:22:48.436 11:17:08 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:22:48.436 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:48.436 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:22:48.436 altname enp24s0f1np1 00:22:48.436 altname ens785f1np1 00:22:48.436 inet 192.168.100.9/24 scope global mlx_0_1 00:22:48.436 valid_lft forever preferred_lft forever 00:22:48.436 11:17:08 -- nvmf/common.sh@410 -- # return 0 00:22:48.436 11:17:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:48.436 11:17:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:48.436 11:17:08 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:22:48.436 11:17:08 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:22:48.436 11:17:08 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:22:48.436 11:17:08 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:48.436 11:17:08 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:22:48.436 11:17:08 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:22:48.436 11:17:08 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:48.436 11:17:08 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:22:48.436 11:17:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:48.436 11:17:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:48.436 
11:17:08 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:48.436 11:17:08 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:22:48.436 11:17:08 -- nvmf/common.sh@104 -- # continue 2 00:22:48.436 11:17:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:48.436 11:17:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:48.436 11:17:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:48.436 11:17:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:48.436 11:17:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:48.436 11:17:08 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:22:48.436 11:17:08 -- nvmf/common.sh@104 -- # continue 2 00:22:48.436 11:17:08 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:22:48.436 11:17:08 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:22:48.436 11:17:08 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:22:48.436 11:17:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:48.436 11:17:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:22:48.436 11:17:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:48.436 11:17:08 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:22:48.436 11:17:08 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:22:48.436 11:17:08 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:22:48.436 11:17:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:22:48.436 11:17:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:48.436 11:17:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:48.436 11:17:08 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:22:48.436 192.168.100.9' 00:22:48.436 11:17:08 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:22:48.436 192.168.100.9' 00:22:48.436 11:17:08 -- nvmf/common.sh@445 -- # head -n 1 00:22:48.436 11:17:08 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:48.436 11:17:08 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:22:48.436 192.168.100.9' 00:22:48.436 11:17:08 -- nvmf/common.sh@446 -- # tail -n +2 00:22:48.436 11:17:08 -- nvmf/common.sh@446 -- # head -n 1 00:22:48.436 11:17:08 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:48.436 11:17:08 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:22:48.436 11:17:08 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:48.436 11:17:08 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:22:48.436 11:17:08 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:22:48.436 11:17:08 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:22:48.694 11:17:09 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:48.694 11:17:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:48.694 11:17:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:48.694 11:17:09 -- common/autotest_common.sh@10 -- # set +x 00:22:48.694 11:17:09 -- nvmf/common.sh@469 -- # nvmfpid=1701811 00:22:48.694 11:17:09 -- nvmf/common.sh@470 -- # waitforlisten 1701811 00:22:48.694 11:17:09 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:48.694 11:17:09 -- common/autotest_common.sh@829 -- # '[' -z 1701811 ']' 00:22:48.694 11:17:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.694 11:17:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:48.694 11:17:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:22:48.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:48.694 11:17:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:48.694 11:17:09 -- common/autotest_common.sh@10 -- # set +x 00:22:48.694 [2024-12-13 11:17:09.059302] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:48.694 [2024-12-13 11:17:09.059346] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:48.694 EAL: No free 2048 kB hugepages reported on node 1 00:22:48.694 [2024-12-13 11:17:09.109634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:48.694 [2024-12-13 11:17:09.181232] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:48.694 [2024-12-13 11:17:09.181333] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:48.694 [2024-12-13 11:17:09.181341] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:48.694 [2024-12-13 11:17:09.181347] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:48.694 [2024-12-13 11:17:09.181439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:48.694 [2024-12-13 11:17:09.181519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:48.694 [2024-12-13 11:17:09.181625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.694 [2024-12-13 11:17:09.181626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:49.624 11:17:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:49.624 11:17:09 -- common/autotest_common.sh@862 -- # return 0 00:22:49.624 11:17:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:49.624 11:17:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:49.624 11:17:09 -- common/autotest_common.sh@10 -- # set +x 00:22:49.624 11:17:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.624 11:17:09 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:49.624 11:17:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.624 11:17:09 -- common/autotest_common.sh@10 -- # set +x 00:22:49.624 [2024-12-13 11:17:09.911622] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x20a8c50/0x20ad140) succeed. 00:22:49.624 [2024-12-13 11:17:09.919739] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20aa240/0x20ee7e0) succeed. 
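rpc_cmd in the trace is a thin wrapper that forwards its arguments to SPDK's scripts/rpc.py on the target's RPC socket (the default /var/tmp/spdk.sock that waitforlisten polls above). The nvmf_create_transport step that produced the two "Create IB device ... succeed" notices is therefore roughly equivalent to the direct call below; the transport flags are taken verbatim from the trace, while the -s socket option is standard rpc.py usage rather than something shown here.

# RDMA transport with 1024 shared buffers and an 8192-byte I/O unit size,
# issued against the nvmf target started with -m 0x1E above.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192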
00:22:49.624 11:17:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.624 11:17:10 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:49.624 11:17:10 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:49.624 11:17:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:49.624 11:17:10 -- common/autotest_common.sh@10 -- # set +x 00:22:49.624 11:17:10 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:49.624 11:17:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:49.624 11:17:10 -- target/shutdown.sh@28 -- # cat 00:22:49.624 11:17:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:49.624 11:17:10 -- target/shutdown.sh@28 -- # cat 00:22:49.624 11:17:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:49.624 11:17:10 -- target/shutdown.sh@28 -- # cat 00:22:49.624 11:17:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:49.624 11:17:10 -- target/shutdown.sh@28 -- # cat 00:22:49.624 11:17:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:49.624 11:17:10 -- target/shutdown.sh@28 -- # cat 00:22:49.624 11:17:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:49.624 11:17:10 -- target/shutdown.sh@28 -- # cat 00:22:49.624 11:17:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:49.624 11:17:10 -- target/shutdown.sh@28 -- # cat 00:22:49.624 11:17:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:49.624 11:17:10 -- target/shutdown.sh@28 -- # cat 00:22:49.624 11:17:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:49.624 11:17:10 -- target/shutdown.sh@28 -- # cat 00:22:49.624 11:17:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:49.624 11:17:10 -- target/shutdown.sh@28 -- # cat 00:22:49.624 11:17:10 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:49.624 11:17:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.624 11:17:10 -- common/autotest_common.sh@10 -- # set +x 00:22:49.624 Malloc1 00:22:49.624 [2024-12-13 11:17:10.118233] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:49.624 Malloc2 00:22:49.624 Malloc3 00:22:49.881 Malloc4 00:22:49.881 Malloc5 00:22:49.881 Malloc6 00:22:49.881 Malloc7 00:22:49.881 Malloc8 00:22:49.881 Malloc9 00:22:50.139 Malloc10 00:22:50.139 11:17:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.139 11:17:10 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:50.139 11:17:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:50.139 11:17:10 -- common/autotest_common.sh@10 -- # set +x 00:22:50.139 11:17:10 -- target/shutdown.sh@102 -- # perfpid=1702129 00:22:50.139 11:17:10 -- target/shutdown.sh@103 -- # waitforlisten 1702129 /var/tmp/bdevperf.sock 00:22:50.139 11:17:10 -- common/autotest_common.sh@829 -- # '[' -z 1702129 ']' 00:22:50.139 11:17:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:50.139 11:17:10 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:50.139 11:17:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:50.139 11:17:10 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:50.139 11:17:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:50.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:50.139 11:17:10 -- nvmf/common.sh@520 -- # config=() 00:22:50.139 11:17:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:50.139 11:17:10 -- nvmf/common.sh@520 -- # local subsystem config 00:22:50.139 11:17:10 -- common/autotest_common.sh@10 -- # set +x 00:22:50.139 11:17:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:50.139 11:17:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:50.139 { 00:22:50.139 "params": { 00:22:50.139 "name": "Nvme$subsystem", 00:22:50.139 "trtype": "$TEST_TRANSPORT", 00:22:50.139 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.139 "adrfam": "ipv4", 00:22:50.139 "trsvcid": "$NVMF_PORT", 00:22:50.139 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.139 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.139 "hdgst": ${hdgst:-false}, 00:22:50.139 "ddgst": ${ddgst:-false} 00:22:50.139 }, 00:22:50.139 "method": "bdev_nvme_attach_controller" 00:22:50.139 } 00:22:50.139 EOF 00:22:50.139 )") 00:22:50.139 11:17:10 -- nvmf/common.sh@542 -- # cat 00:22:50.139 11:17:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:50.139 11:17:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:50.139 { 00:22:50.139 "params": { 00:22:50.139 "name": "Nvme$subsystem", 00:22:50.139 "trtype": "$TEST_TRANSPORT", 00:22:50.139 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.139 "adrfam": "ipv4", 00:22:50.139 "trsvcid": "$NVMF_PORT", 00:22:50.139 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.139 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.139 "hdgst": ${hdgst:-false}, 00:22:50.139 "ddgst": ${ddgst:-false} 00:22:50.139 }, 00:22:50.139 "method": "bdev_nvme_attach_controller" 00:22:50.139 } 00:22:50.139 EOF 00:22:50.139 )") 00:22:50.139 11:17:10 -- nvmf/common.sh@542 -- # cat 00:22:50.139 11:17:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:50.139 11:17:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:50.139 { 00:22:50.139 "params": { 00:22:50.139 "name": "Nvme$subsystem", 00:22:50.139 "trtype": "$TEST_TRANSPORT", 00:22:50.139 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.139 "adrfam": "ipv4", 00:22:50.139 "trsvcid": "$NVMF_PORT", 00:22:50.139 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.139 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.139 "hdgst": ${hdgst:-false}, 00:22:50.139 "ddgst": ${ddgst:-false} 00:22:50.139 }, 00:22:50.139 "method": "bdev_nvme_attach_controller" 00:22:50.139 } 00:22:50.139 EOF 00:22:50.139 )") 00:22:50.139 11:17:10 -- nvmf/common.sh@542 -- # cat 00:22:50.139 11:17:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:50.139 11:17:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:50.139 { 00:22:50.139 "params": { 00:22:50.139 "name": "Nvme$subsystem", 00:22:50.139 "trtype": "$TEST_TRANSPORT", 00:22:50.139 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.140 "adrfam": "ipv4", 00:22:50.140 "trsvcid": "$NVMF_PORT", 00:22:50.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.140 "hdgst": ${hdgst:-false}, 00:22:50.140 "ddgst": ${ddgst:-false} 00:22:50.140 }, 00:22:50.140 "method": "bdev_nvme_attach_controller" 00:22:50.140 } 00:22:50.140 EOF 00:22:50.140 )") 00:22:50.140 11:17:10 -- nvmf/common.sh@542 -- # cat 00:22:50.140 11:17:10 -- nvmf/common.sh@522 -- # for subsystem in 
"${@:-1}" 00:22:50.140 11:17:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:50.140 { 00:22:50.140 "params": { 00:22:50.140 "name": "Nvme$subsystem", 00:22:50.140 "trtype": "$TEST_TRANSPORT", 00:22:50.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.140 "adrfam": "ipv4", 00:22:50.140 "trsvcid": "$NVMF_PORT", 00:22:50.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.140 "hdgst": ${hdgst:-false}, 00:22:50.140 "ddgst": ${ddgst:-false} 00:22:50.140 }, 00:22:50.140 "method": "bdev_nvme_attach_controller" 00:22:50.140 } 00:22:50.140 EOF 00:22:50.140 )") 00:22:50.140 11:17:10 -- nvmf/common.sh@542 -- # cat 00:22:50.140 11:17:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:50.140 11:17:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:50.140 { 00:22:50.140 "params": { 00:22:50.140 "name": "Nvme$subsystem", 00:22:50.140 "trtype": "$TEST_TRANSPORT", 00:22:50.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.140 "adrfam": "ipv4", 00:22:50.140 "trsvcid": "$NVMF_PORT", 00:22:50.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.140 "hdgst": ${hdgst:-false}, 00:22:50.140 "ddgst": ${ddgst:-false} 00:22:50.140 }, 00:22:50.140 "method": "bdev_nvme_attach_controller" 00:22:50.140 } 00:22:50.140 EOF 00:22:50.140 )") 00:22:50.140 11:17:10 -- nvmf/common.sh@542 -- # cat 00:22:50.140 [2024-12-13 11:17:10.592722] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:50.140 [2024-12-13 11:17:10.592766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1702129 ] 00:22:50.140 11:17:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:50.140 11:17:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:50.140 { 00:22:50.140 "params": { 00:22:50.140 "name": "Nvme$subsystem", 00:22:50.140 "trtype": "$TEST_TRANSPORT", 00:22:50.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.140 "adrfam": "ipv4", 00:22:50.140 "trsvcid": "$NVMF_PORT", 00:22:50.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.140 "hdgst": ${hdgst:-false}, 00:22:50.140 "ddgst": ${ddgst:-false} 00:22:50.140 }, 00:22:50.140 "method": "bdev_nvme_attach_controller" 00:22:50.140 } 00:22:50.140 EOF 00:22:50.140 )") 00:22:50.140 11:17:10 -- nvmf/common.sh@542 -- # cat 00:22:50.140 11:17:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:50.140 11:17:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:50.140 { 00:22:50.140 "params": { 00:22:50.140 "name": "Nvme$subsystem", 00:22:50.140 "trtype": "$TEST_TRANSPORT", 00:22:50.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.140 "adrfam": "ipv4", 00:22:50.140 "trsvcid": "$NVMF_PORT", 00:22:50.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.140 "hdgst": ${hdgst:-false}, 00:22:50.140 "ddgst": ${ddgst:-false} 00:22:50.140 }, 00:22:50.140 "method": "bdev_nvme_attach_controller" 00:22:50.140 } 00:22:50.140 EOF 00:22:50.140 )") 00:22:50.140 11:17:10 -- nvmf/common.sh@542 -- # cat 00:22:50.140 11:17:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:50.140 11:17:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:50.140 { 00:22:50.140 "params": { 
00:22:50.140 "name": "Nvme$subsystem", 00:22:50.140 "trtype": "$TEST_TRANSPORT", 00:22:50.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.140 "adrfam": "ipv4", 00:22:50.140 "trsvcid": "$NVMF_PORT", 00:22:50.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.140 "hdgst": ${hdgst:-false}, 00:22:50.140 "ddgst": ${ddgst:-false} 00:22:50.140 }, 00:22:50.140 "method": "bdev_nvme_attach_controller" 00:22:50.140 } 00:22:50.140 EOF 00:22:50.140 )") 00:22:50.140 11:17:10 -- nvmf/common.sh@542 -- # cat 00:22:50.140 11:17:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:50.140 11:17:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:50.140 { 00:22:50.140 "params": { 00:22:50.140 "name": "Nvme$subsystem", 00:22:50.140 "trtype": "$TEST_TRANSPORT", 00:22:50.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.140 "adrfam": "ipv4", 00:22:50.140 "trsvcid": "$NVMF_PORT", 00:22:50.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.140 "hdgst": ${hdgst:-false}, 00:22:50.140 "ddgst": ${ddgst:-false} 00:22:50.140 }, 00:22:50.140 "method": "bdev_nvme_attach_controller" 00:22:50.140 } 00:22:50.140 EOF 00:22:50.140 )") 00:22:50.140 11:17:10 -- nvmf/common.sh@542 -- # cat 00:22:50.140 EAL: No free 2048 kB hugepages reported on node 1 00:22:50.140 11:17:10 -- nvmf/common.sh@544 -- # jq . 00:22:50.140 11:17:10 -- nvmf/common.sh@545 -- # IFS=, 00:22:50.140 11:17:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:22:50.140 "params": { 00:22:50.140 "name": "Nvme1", 00:22:50.140 "trtype": "rdma", 00:22:50.140 "traddr": "192.168.100.8", 00:22:50.140 "adrfam": "ipv4", 00:22:50.140 "trsvcid": "4420", 00:22:50.140 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.140 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:50.140 "hdgst": false, 00:22:50.140 "ddgst": false 00:22:50.140 }, 00:22:50.140 "method": "bdev_nvme_attach_controller" 00:22:50.140 },{ 00:22:50.140 "params": { 00:22:50.140 "name": "Nvme2", 00:22:50.140 "trtype": "rdma", 00:22:50.140 "traddr": "192.168.100.8", 00:22:50.140 "adrfam": "ipv4", 00:22:50.140 "trsvcid": "4420", 00:22:50.140 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:50.140 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:50.140 "hdgst": false, 00:22:50.140 "ddgst": false 00:22:50.140 }, 00:22:50.140 "method": "bdev_nvme_attach_controller" 00:22:50.140 },{ 00:22:50.140 "params": { 00:22:50.140 "name": "Nvme3", 00:22:50.140 "trtype": "rdma", 00:22:50.140 "traddr": "192.168.100.8", 00:22:50.140 "adrfam": "ipv4", 00:22:50.140 "trsvcid": "4420", 00:22:50.140 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:50.140 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:50.140 "hdgst": false, 00:22:50.140 "ddgst": false 00:22:50.140 }, 00:22:50.140 "method": "bdev_nvme_attach_controller" 00:22:50.140 },{ 00:22:50.140 "params": { 00:22:50.140 "name": "Nvme4", 00:22:50.140 "trtype": "rdma", 00:22:50.140 "traddr": "192.168.100.8", 00:22:50.140 "adrfam": "ipv4", 00:22:50.140 "trsvcid": "4420", 00:22:50.140 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:50.140 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:50.140 "hdgst": false, 00:22:50.140 "ddgst": false 00:22:50.140 }, 00:22:50.140 "method": "bdev_nvme_attach_controller" 00:22:50.140 },{ 00:22:50.140 "params": { 00:22:50.140 "name": "Nvme5", 00:22:50.140 "trtype": "rdma", 00:22:50.140 "traddr": "192.168.100.8", 00:22:50.140 "adrfam": "ipv4", 00:22:50.140 "trsvcid": "4420", 00:22:50.140 "subnqn": 
"nqn.2016-06.io.spdk:cnode5", 00:22:50.140 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:50.140 "hdgst": false, 00:22:50.140 "ddgst": false 00:22:50.140 }, 00:22:50.140 "method": "bdev_nvme_attach_controller" 00:22:50.140 },{ 00:22:50.140 "params": { 00:22:50.140 "name": "Nvme6", 00:22:50.140 "trtype": "rdma", 00:22:50.140 "traddr": "192.168.100.8", 00:22:50.140 "adrfam": "ipv4", 00:22:50.140 "trsvcid": "4420", 00:22:50.140 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:50.140 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:50.140 "hdgst": false, 00:22:50.140 "ddgst": false 00:22:50.140 }, 00:22:50.140 "method": "bdev_nvme_attach_controller" 00:22:50.140 },{ 00:22:50.140 "params": { 00:22:50.140 "name": "Nvme7", 00:22:50.140 "trtype": "rdma", 00:22:50.140 "traddr": "192.168.100.8", 00:22:50.140 "adrfam": "ipv4", 00:22:50.140 "trsvcid": "4420", 00:22:50.140 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:50.140 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:50.140 "hdgst": false, 00:22:50.140 "ddgst": false 00:22:50.140 }, 00:22:50.140 "method": "bdev_nvme_attach_controller" 00:22:50.140 },{ 00:22:50.140 "params": { 00:22:50.140 "name": "Nvme8", 00:22:50.140 "trtype": "rdma", 00:22:50.140 "traddr": "192.168.100.8", 00:22:50.140 "adrfam": "ipv4", 00:22:50.140 "trsvcid": "4420", 00:22:50.140 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:50.140 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:50.140 "hdgst": false, 00:22:50.140 "ddgst": false 00:22:50.140 }, 00:22:50.140 "method": "bdev_nvme_attach_controller" 00:22:50.140 },{ 00:22:50.140 "params": { 00:22:50.140 "name": "Nvme9", 00:22:50.140 "trtype": "rdma", 00:22:50.140 "traddr": "192.168.100.8", 00:22:50.140 "adrfam": "ipv4", 00:22:50.140 "trsvcid": "4420", 00:22:50.140 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:50.140 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:50.140 "hdgst": false, 00:22:50.140 "ddgst": false 00:22:50.141 }, 00:22:50.141 "method": "bdev_nvme_attach_controller" 00:22:50.141 },{ 00:22:50.141 "params": { 00:22:50.141 "name": "Nvme10", 00:22:50.141 "trtype": "rdma", 00:22:50.141 "traddr": "192.168.100.8", 00:22:50.141 "adrfam": "ipv4", 00:22:50.141 "trsvcid": "4420", 00:22:50.141 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:50.141 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:50.141 "hdgst": false, 00:22:50.141 "ddgst": false 00:22:50.141 }, 00:22:50.141 "method": "bdev_nvme_attach_controller" 00:22:50.141 }' 00:22:50.141 [2024-12-13 11:17:10.645670] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.398 [2024-12-13 11:17:10.710529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:51.330 Running I/O for 10 seconds... 
00:22:51.587 11:17:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:51.587 11:17:12 -- common/autotest_common.sh@862 -- # return 0 00:22:51.587 11:17:12 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:51.587 11:17:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.587 11:17:12 -- common/autotest_common.sh@10 -- # set +x 00:22:51.845 11:17:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.845 11:17:12 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:51.845 11:17:12 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:51.845 11:17:12 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:51.845 11:17:12 -- target/shutdown.sh@57 -- # local ret=1 00:22:51.845 11:17:12 -- target/shutdown.sh@58 -- # local i 00:22:51.845 11:17:12 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:51.845 11:17:12 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:51.845 11:17:12 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:51.845 11:17:12 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:51.845 11:17:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.845 11:17:12 -- common/autotest_common.sh@10 -- # set +x 00:22:51.845 11:17:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.845 11:17:12 -- target/shutdown.sh@60 -- # read_io_count=461 00:22:51.845 11:17:12 -- target/shutdown.sh@63 -- # '[' 461 -ge 100 ']' 00:22:51.845 11:17:12 -- target/shutdown.sh@64 -- # ret=0 00:22:51.845 11:17:12 -- target/shutdown.sh@65 -- # break 00:22:51.845 11:17:12 -- target/shutdown.sh@69 -- # return 0 00:22:51.845 11:17:12 -- target/shutdown.sh@109 -- # killprocess 1702129 00:22:51.845 11:17:12 -- common/autotest_common.sh@936 -- # '[' -z 1702129 ']' 00:22:51.845 11:17:12 -- common/autotest_common.sh@940 -- # kill -0 1702129 00:22:51.845 11:17:12 -- common/autotest_common.sh@941 -- # uname 00:22:51.845 11:17:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:51.845 11:17:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1702129 00:22:51.845 11:17:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:51.845 11:17:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:51.845 11:17:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1702129' 00:22:51.845 killing process with pid 1702129 00:22:51.845 11:17:12 -- common/autotest_common.sh@955 -- # kill 1702129 00:22:51.845 11:17:12 -- common/autotest_common.sh@960 -- # wait 1702129 00:22:52.103 Received shutdown signal, test time was about 0.884378 seconds 00:22:52.103 00:22:52.103 Latency(us) 00:22:52.103 [2024-12-13T10:17:12.672Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.103 [2024-12-13T10:17:12.672Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:52.103 Verification LBA range: start 0x0 length 0x400 00:22:52.103 Nvme1n1 : 0.87 749.96 46.87 0.00 0.00 84374.20 7621.59 104080.88 00:22:52.103 [2024-12-13T10:17:12.672Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:52.103 Verification LBA range: start 0x0 length 0x400 00:22:52.103 Nvme2n1 : 0.88 749.06 46.82 0.00 0.00 83708.77 7912.87 100973.99 00:22:52.103 [2024-12-13T10:17:12.672Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:52.103 Verification LBA range: start 0x0 length 0x400 00:22:52.103 Nvme3n1 : 
0.88 748.17 46.76 0.00 0.00 83169.22 8155.59 98643.82 00:22:52.103 [2024-12-13T10:17:12.672Z] Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:52.103 Verification LBA range: start 0x0 length 0x400 00:22:52.103 Nvme4n1 : 0.88 781.51 48.84 0.00 0.00 79000.39 8349.77 92818.39 00:22:52.103 [2024-12-13T10:17:12.672Z] Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:52.103 Verification LBA range: start 0x0 length 0x400 00:22:52.103 Nvme5n1 : 0.88 774.88 48.43 0.00 0.00 79166.61 8592.50 93206.76 00:22:52.103 [2024-12-13T10:17:12.672Z] Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:52.103 Verification LBA range: start 0x0 length 0x400 00:22:52.103 Nvme6n1 : 0.88 786.55 49.16 0.00 0.00 77349.25 8738.13 70681.79 00:22:52.103 [2024-12-13T10:17:12.672Z] Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:52.103 Verification LBA range: start 0x0 length 0x400 00:22:52.103 Nvme7n1 : 0.88 785.69 49.11 0.00 0.00 76872.00 8835.22 69516.71 00:22:52.103 [2024-12-13T10:17:12.672Z] Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:52.103 Verification LBA range: start 0x0 length 0x400 00:22:52.103 Nvme8n1 : 0.88 784.85 49.05 0.00 0.00 76404.86 8932.31 67963.26 00:22:52.103 [2024-12-13T10:17:12.672Z] Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:52.103 Verification LBA range: start 0x0 length 0x400 00:22:52.103 Nvme9n1 : 0.88 784.02 49.00 0.00 0.00 75934.42 9029.40 66798.17 00:22:52.103 [2024-12-13T10:17:12.672Z] Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:52.103 Verification LBA range: start 0x0 length 0x400 00:22:52.103 Nvme10n1 : 0.88 546.63 34.16 0.00 0.00 108100.28 7815.77 299815.06 00:22:52.103 [2024-12-13T10:17:12.672Z] =================================================================================================================== 00:22:52.103 [2024-12-13T10:17:12.672Z] Total : 7491.32 468.21 0.00 0.00 81579.18 7621.59 299815.06 00:22:52.360 11:17:12 -- target/shutdown.sh@112 -- # sleep 1 00:22:53.290 11:17:13 -- target/shutdown.sh@113 -- # kill -0 1701811 00:22:53.290 11:17:13 -- target/shutdown.sh@115 -- # stoptarget 00:22:53.290 11:17:13 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:53.290 11:17:13 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:53.290 11:17:13 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:53.290 11:17:13 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:53.290 11:17:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:53.290 11:17:13 -- nvmf/common.sh@116 -- # sync 00:22:53.290 11:17:13 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:22:53.290 11:17:13 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:22:53.290 11:17:13 -- nvmf/common.sh@119 -- # set +e 00:22:53.290 11:17:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:53.291 11:17:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:22:53.291 rmmod nvme_rdma 00:22:53.291 rmmod nvme_fabrics 00:22:53.291 11:17:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:53.291 11:17:13 -- nvmf/common.sh@123 -- # set -e 00:22:53.291 11:17:13 -- nvmf/common.sh@124 -- # return 0 00:22:53.291 11:17:13 -- nvmf/common.sh@477 -- # '[' -n 1701811 ']' 00:22:53.291 11:17:13 -- nvmf/common.sh@478 -- # killprocess 1701811 00:22:53.291 11:17:13 -- 
common/autotest_common.sh@936 -- # '[' -z 1701811 ']' 00:22:53.291 11:17:13 -- common/autotest_common.sh@940 -- # kill -0 1701811 00:22:53.291 11:17:13 -- common/autotest_common.sh@941 -- # uname 00:22:53.291 11:17:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:53.291 11:17:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1701811 00:22:53.548 11:17:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:53.548 11:17:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:53.548 11:17:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1701811' 00:22:53.548 killing process with pid 1701811 00:22:53.548 11:17:13 -- common/autotest_common.sh@955 -- # kill 1701811 00:22:53.548 11:17:13 -- common/autotest_common.sh@960 -- # wait 1701811 00:22:53.806 11:17:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:53.806 11:17:14 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:22:53.806 00:22:53.806 real 0m5.540s 00:22:53.806 user 0m22.568s 00:22:53.806 sys 0m1.047s 00:22:53.806 11:17:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:53.806 11:17:14 -- common/autotest_common.sh@10 -- # set +x 00:22:53.806 ************************************ 00:22:53.806 END TEST nvmf_shutdown_tc2 00:22:53.806 ************************************ 00:22:53.806 11:17:14 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:53.806 11:17:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:53.806 11:17:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:53.806 11:17:14 -- common/autotest_common.sh@10 -- # set +x 00:22:53.806 ************************************ 00:22:53.806 START TEST nvmf_shutdown_tc3 00:22:53.806 ************************************ 00:22:53.806 11:17:14 -- common/autotest_common.sh@1114 -- # nvmf_shutdown_tc3 00:22:53.806 11:17:14 -- target/shutdown.sh@120 -- # starttarget 00:22:53.806 11:17:14 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:53.806 11:17:14 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:22:53.806 11:17:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:53.806 11:17:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:53.806 11:17:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:53.806 11:17:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:53.806 11:17:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.806 11:17:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:53.806 11:17:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.065 11:17:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:54.065 11:17:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:54.065 11:17:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:54.065 11:17:14 -- common/autotest_common.sh@10 -- # set +x 00:22:54.065 11:17:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:54.065 11:17:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:54.065 11:17:14 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:54.065 11:17:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:54.065 11:17:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:54.065 11:17:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:54.065 11:17:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:54.065 11:17:14 -- nvmf/common.sh@294 -- # net_devs=() 00:22:54.065 11:17:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:54.065 11:17:14 -- 
nvmf/common.sh@295 -- # e810=() 00:22:54.065 11:17:14 -- nvmf/common.sh@295 -- # local -ga e810 00:22:54.065 11:17:14 -- nvmf/common.sh@296 -- # x722=() 00:22:54.065 11:17:14 -- nvmf/common.sh@296 -- # local -ga x722 00:22:54.065 11:17:14 -- nvmf/common.sh@297 -- # mlx=() 00:22:54.065 11:17:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:54.065 11:17:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:54.065 11:17:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:54.065 11:17:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:54.065 11:17:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:54.065 11:17:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:54.065 11:17:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:54.065 11:17:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:54.065 11:17:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:54.065 11:17:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:54.065 11:17:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:54.065 11:17:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:54.065 11:17:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:54.065 11:17:14 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:22:54.065 11:17:14 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:22:54.065 11:17:14 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:22:54.065 11:17:14 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:22:54.065 11:17:14 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:22:54.065 11:17:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:54.065 11:17:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:54.065 11:17:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:22:54.065 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:22:54.065 11:17:14 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:22:54.065 11:17:14 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:22:54.065 11:17:14 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:54.065 11:17:14 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:54.065 11:17:14 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:22:54.065 11:17:14 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:22:54.065 11:17:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:54.065 11:17:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:22:54.065 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:22:54.065 11:17:14 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:22:54.065 11:17:14 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:22:54.065 11:17:14 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:54.065 11:17:14 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:54.065 11:17:14 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:22:54.065 11:17:14 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:22:54.065 11:17:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:54.065 11:17:14 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:22:54.065 11:17:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:54.065 11:17:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.065 11:17:14 -- nvmf/common.sh@383 -- # 
(( 1 == 0 )) 00:22:54.065 11:17:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.065 11:17:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:22:54.065 Found net devices under 0000:18:00.0: mlx_0_0 00:22:54.065 11:17:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.065 11:17:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:54.065 11:17:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.065 11:17:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:54.065 11:17:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.065 11:17:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:22:54.065 Found net devices under 0000:18:00.1: mlx_0_1 00:22:54.065 11:17:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.065 11:17:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:54.065 11:17:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:54.065 11:17:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:54.065 11:17:14 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:22:54.065 11:17:14 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:22:54.065 11:17:14 -- nvmf/common.sh@408 -- # rdma_device_init 00:22:54.065 11:17:14 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:22:54.065 11:17:14 -- nvmf/common.sh@57 -- # uname 00:22:54.065 11:17:14 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:22:54.065 11:17:14 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:22:54.065 11:17:14 -- nvmf/common.sh@62 -- # modprobe ib_core 00:22:54.065 11:17:14 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:22:54.065 11:17:14 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:22:54.065 11:17:14 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:22:54.065 11:17:14 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:22:54.065 11:17:14 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:22:54.065 11:17:14 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:22:54.065 11:17:14 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:54.065 11:17:14 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:22:54.065 11:17:14 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:54.065 11:17:14 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:22:54.065 11:17:14 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:22:54.065 11:17:14 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:54.065 11:17:14 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:22:54.065 11:17:14 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:54.065 11:17:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:54.065 11:17:14 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:54.065 11:17:14 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:22:54.065 11:17:14 -- nvmf/common.sh@104 -- # continue 2 00:22:54.065 11:17:14 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:54.065 11:17:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:54.065 11:17:14 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:54.065 11:17:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:54.065 11:17:14 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:54.065 11:17:14 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:22:54.065 11:17:14 -- nvmf/common.sh@104 -- # continue 2 00:22:54.065 11:17:14 -- nvmf/common.sh@72 -- # for nic_name 
in $(get_rdma_if_list) 00:22:54.065 11:17:14 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:22:54.065 11:17:14 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:22:54.065 11:17:14 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:22:54.065 11:17:14 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:54.065 11:17:14 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:54.065 11:17:14 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:22:54.065 11:17:14 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:22:54.065 11:17:14 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:22:54.065 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:54.065 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:22:54.065 altname enp24s0f0np0 00:22:54.065 altname ens785f0np0 00:22:54.065 inet 192.168.100.8/24 scope global mlx_0_0 00:22:54.065 valid_lft forever preferred_lft forever 00:22:54.065 11:17:14 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:22:54.065 11:17:14 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:22:54.065 11:17:14 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:22:54.065 11:17:14 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:22:54.065 11:17:14 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:54.065 11:17:14 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:54.065 11:17:14 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:22:54.065 11:17:14 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:22:54.065 11:17:14 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:22:54.065 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:54.065 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:22:54.065 altname enp24s0f1np1 00:22:54.066 altname ens785f1np1 00:22:54.066 inet 192.168.100.9/24 scope global mlx_0_1 00:22:54.066 valid_lft forever preferred_lft forever 00:22:54.066 11:17:14 -- nvmf/common.sh@410 -- # return 0 00:22:54.066 11:17:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:54.066 11:17:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:54.066 11:17:14 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:22:54.066 11:17:14 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:22:54.066 11:17:14 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:22:54.066 11:17:14 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:54.066 11:17:14 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:22:54.066 11:17:14 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:22:54.066 11:17:14 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:54.066 11:17:14 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:22:54.066 11:17:14 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:54.066 11:17:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:54.066 11:17:14 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:54.066 11:17:14 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:22:54.066 11:17:14 -- nvmf/common.sh@104 -- # continue 2 00:22:54.066 11:17:14 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:54.066 11:17:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:54.066 11:17:14 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:54.066 11:17:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:54.066 11:17:14 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:54.066 11:17:14 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:22:54.066 11:17:14 -- 
nvmf/common.sh@104 -- # continue 2 00:22:54.066 11:17:14 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:22:54.066 11:17:14 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:22:54.066 11:17:14 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:22:54.066 11:17:14 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:22:54.066 11:17:14 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:54.066 11:17:14 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:54.066 11:17:14 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:22:54.066 11:17:14 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:22:54.066 11:17:14 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:22:54.066 11:17:14 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:22:54.066 11:17:14 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:54.066 11:17:14 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:54.066 11:17:14 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:22:54.066 192.168.100.9' 00:22:54.066 11:17:14 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:22:54.066 192.168.100.9' 00:22:54.066 11:17:14 -- nvmf/common.sh@445 -- # head -n 1 00:22:54.066 11:17:14 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:54.066 11:17:14 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:22:54.066 192.168.100.9' 00:22:54.066 11:17:14 -- nvmf/common.sh@446 -- # tail -n +2 00:22:54.066 11:17:14 -- nvmf/common.sh@446 -- # head -n 1 00:22:54.066 11:17:14 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:54.066 11:17:14 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:22:54.066 11:17:14 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:54.066 11:17:14 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:22:54.066 11:17:14 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:22:54.066 11:17:14 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:22:54.066 11:17:14 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:54.066 11:17:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:54.066 11:17:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:54.066 11:17:14 -- common/autotest_common.sh@10 -- # set +x 00:22:54.066 11:17:14 -- nvmf/common.sh@469 -- # nvmfpid=1703037 00:22:54.066 11:17:14 -- nvmf/common.sh@470 -- # waitforlisten 1703037 00:22:54.066 11:17:14 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:54.066 11:17:14 -- common/autotest_common.sh@829 -- # '[' -z 1703037 ']' 00:22:54.066 11:17:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.066 11:17:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:54.066 11:17:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.066 11:17:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:54.066 11:17:14 -- common/autotest_common.sh@10 -- # set +x 00:22:54.323 [2024-12-13 11:17:14.649186] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:54.323 [2024-12-13 11:17:14.649236] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.323 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.323 [2024-12-13 11:17:14.701401] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:54.323 [2024-12-13 11:17:14.776432] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:54.324 [2024-12-13 11:17:14.776534] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:54.324 [2024-12-13 11:17:14.776541] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:54.324 [2024-12-13 11:17:14.776547] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:54.324 [2024-12-13 11:17:14.776643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:54.324 [2024-12-13 11:17:14.776725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:54.324 [2024-12-13 11:17:14.776831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.324 [2024-12-13 11:17:14.776832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:55.253 11:17:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:55.253 11:17:15 -- common/autotest_common.sh@862 -- # return 0 00:22:55.253 11:17:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:55.253 11:17:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:55.253 11:17:15 -- common/autotest_common.sh@10 -- # set +x 00:22:55.253 11:17:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.253 11:17:15 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:55.253 11:17:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.253 11:17:15 -- common/autotest_common.sh@10 -- # set +x 00:22:55.253 [2024-12-13 11:17:15.517523] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7a5c50/0x7aa140) succeed. 00:22:55.253 [2024-12-13 11:17:15.525605] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x7a7240/0x7eb7e0) succeed. 
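On the target side, the trace above creates an RDMA transport (1024 shared buffers, 8192-byte IO unit) and, in the block that follows, ten malloc bdevs behind subsystems cnode1..cnode10 listening on 192.168.100.8 port 4420. The per-subsystem setup is batched through rpcs.txt rather than shown verbatim, so the hand-rolled equivalent below with scripts/rpc.py is only a sketch; the RPC socket path, bdev sizes, and serial numbers are assumptions.

# Sketch only: approximate the target configuration implied by this run.
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"    # assumed RPC socket

$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

for i in $(seq 1 10); do
    $RPC bdev_malloc_create -b "Malloc$i" 128 512        # 128 MiB, 512 B blocks (assumed)
    $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t rdma -a 192.168.100.8 -s 4420
done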
00:22:55.253 11:17:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.253 11:17:15 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:55.253 11:17:15 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:55.253 11:17:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:55.253 11:17:15 -- common/autotest_common.sh@10 -- # set +x 00:22:55.253 11:17:15 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:55.253 11:17:15 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:55.253 11:17:15 -- target/shutdown.sh@28 -- # cat 00:22:55.253 11:17:15 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:55.253 11:17:15 -- target/shutdown.sh@28 -- # cat 00:22:55.253 11:17:15 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:55.253 11:17:15 -- target/shutdown.sh@28 -- # cat 00:22:55.253 11:17:15 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:55.253 11:17:15 -- target/shutdown.sh@28 -- # cat 00:22:55.253 11:17:15 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:55.253 11:17:15 -- target/shutdown.sh@28 -- # cat 00:22:55.253 11:17:15 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:55.253 11:17:15 -- target/shutdown.sh@28 -- # cat 00:22:55.253 11:17:15 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:55.253 11:17:15 -- target/shutdown.sh@28 -- # cat 00:22:55.253 11:17:15 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:55.253 11:17:15 -- target/shutdown.sh@28 -- # cat 00:22:55.253 11:17:15 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:55.253 11:17:15 -- target/shutdown.sh@28 -- # cat 00:22:55.253 11:17:15 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:55.253 11:17:15 -- target/shutdown.sh@28 -- # cat 00:22:55.253 11:17:15 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:55.253 11:17:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.253 11:17:15 -- common/autotest_common.sh@10 -- # set +x 00:22:55.253 Malloc1 00:22:55.253 [2024-12-13 11:17:15.722446] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:55.253 Malloc2 00:22:55.253 Malloc3 00:22:55.511 Malloc4 00:22:55.511 Malloc5 00:22:55.511 Malloc6 00:22:55.511 Malloc7 00:22:55.511 Malloc8 00:22:55.511 Malloc9 00:22:55.768 Malloc10 00:22:55.769 11:17:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.769 11:17:16 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:55.769 11:17:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:55.769 11:17:16 -- common/autotest_common.sh@10 -- # set +x 00:22:55.769 11:17:16 -- target/shutdown.sh@124 -- # perfpid=1703353 00:22:55.769 11:17:16 -- target/shutdown.sh@125 -- # waitforlisten 1703353 /var/tmp/bdevperf.sock 00:22:55.769 11:17:16 -- common/autotest_common.sh@829 -- # '[' -z 1703353 ']' 00:22:55.769 11:17:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:55.769 11:17:16 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:55.769 11:17:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:55.769 11:17:16 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:55.769 11:17:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:55.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:55.769 11:17:16 -- nvmf/common.sh@520 -- # config=() 00:22:55.769 11:17:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:55.769 11:17:16 -- nvmf/common.sh@520 -- # local subsystem config 00:22:55.769 11:17:16 -- common/autotest_common.sh@10 -- # set +x 00:22:55.769 11:17:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:55.769 11:17:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:55.769 { 00:22:55.769 "params": { 00:22:55.769 "name": "Nvme$subsystem", 00:22:55.769 "trtype": "$TEST_TRANSPORT", 00:22:55.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:55.769 "adrfam": "ipv4", 00:22:55.769 "trsvcid": "$NVMF_PORT", 00:22:55.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:55.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:55.769 "hdgst": ${hdgst:-false}, 00:22:55.769 "ddgst": ${ddgst:-false} 00:22:55.769 }, 00:22:55.769 "method": "bdev_nvme_attach_controller" 00:22:55.769 } 00:22:55.769 EOF 00:22:55.769 )") 00:22:55.769 11:17:16 -- nvmf/common.sh@542 -- # cat 00:22:55.769 11:17:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:55.769 11:17:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:55.769 { 00:22:55.769 "params": { 00:22:55.769 "name": "Nvme$subsystem", 00:22:55.769 "trtype": "$TEST_TRANSPORT", 00:22:55.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:55.769 "adrfam": "ipv4", 00:22:55.769 "trsvcid": "$NVMF_PORT", 00:22:55.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:55.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:55.769 "hdgst": ${hdgst:-false}, 00:22:55.769 "ddgst": ${ddgst:-false} 00:22:55.769 }, 00:22:55.769 "method": "bdev_nvme_attach_controller" 00:22:55.769 } 00:22:55.769 EOF 00:22:55.769 )") 00:22:55.769 11:17:16 -- nvmf/common.sh@542 -- # cat 00:22:55.769 11:17:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:55.769 11:17:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:55.769 { 00:22:55.769 "params": { 00:22:55.769 "name": "Nvme$subsystem", 00:22:55.769 "trtype": "$TEST_TRANSPORT", 00:22:55.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:55.769 "adrfam": "ipv4", 00:22:55.769 "trsvcid": "$NVMF_PORT", 00:22:55.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:55.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:55.769 "hdgst": ${hdgst:-false}, 00:22:55.769 "ddgst": ${ddgst:-false} 00:22:55.769 }, 00:22:55.769 "method": "bdev_nvme_attach_controller" 00:22:55.769 } 00:22:55.769 EOF 00:22:55.769 )") 00:22:55.769 11:17:16 -- nvmf/common.sh@542 -- # cat 00:22:55.769 11:17:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:55.769 11:17:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:55.769 { 00:22:55.769 "params": { 00:22:55.769 "name": "Nvme$subsystem", 00:22:55.769 "trtype": "$TEST_TRANSPORT", 00:22:55.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:55.769 "adrfam": "ipv4", 00:22:55.769 "trsvcid": "$NVMF_PORT", 00:22:55.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:55.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:55.769 "hdgst": ${hdgst:-false}, 00:22:55.769 "ddgst": ${ddgst:-false} 00:22:55.769 }, 00:22:55.769 "method": "bdev_nvme_attach_controller" 00:22:55.769 } 00:22:55.769 EOF 00:22:55.769 )") 00:22:55.769 11:17:16 -- nvmf/common.sh@542 -- # cat 00:22:55.769 11:17:16 -- nvmf/common.sh@522 -- # for subsystem in 
"${@:-1}" 00:22:55.769 11:17:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:55.769 { 00:22:55.769 "params": { 00:22:55.769 "name": "Nvme$subsystem", 00:22:55.769 "trtype": "$TEST_TRANSPORT", 00:22:55.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:55.769 "adrfam": "ipv4", 00:22:55.769 "trsvcid": "$NVMF_PORT", 00:22:55.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:55.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:55.769 "hdgst": ${hdgst:-false}, 00:22:55.769 "ddgst": ${ddgst:-false} 00:22:55.769 }, 00:22:55.769 "method": "bdev_nvme_attach_controller" 00:22:55.769 } 00:22:55.769 EOF 00:22:55.769 )") 00:22:55.769 11:17:16 -- nvmf/common.sh@542 -- # cat 00:22:55.769 11:17:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:55.769 11:17:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:55.769 { 00:22:55.769 "params": { 00:22:55.769 "name": "Nvme$subsystem", 00:22:55.769 "trtype": "$TEST_TRANSPORT", 00:22:55.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:55.769 "adrfam": "ipv4", 00:22:55.769 "trsvcid": "$NVMF_PORT", 00:22:55.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:55.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:55.769 "hdgst": ${hdgst:-false}, 00:22:55.769 "ddgst": ${ddgst:-false} 00:22:55.769 }, 00:22:55.769 "method": "bdev_nvme_attach_controller" 00:22:55.769 } 00:22:55.769 EOF 00:22:55.769 )") 00:22:55.769 11:17:16 -- nvmf/common.sh@542 -- # cat 00:22:55.769 11:17:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:55.769 11:17:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:55.769 { 00:22:55.769 "params": { 00:22:55.769 "name": "Nvme$subsystem", 00:22:55.769 "trtype": "$TEST_TRANSPORT", 00:22:55.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:55.769 "adrfam": "ipv4", 00:22:55.769 "trsvcid": "$NVMF_PORT", 00:22:55.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:55.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:55.769 "hdgst": ${hdgst:-false}, 00:22:55.769 "ddgst": ${ddgst:-false} 00:22:55.769 }, 00:22:55.769 "method": "bdev_nvme_attach_controller" 00:22:55.769 } 00:22:55.769 EOF 00:22:55.769 )") 00:22:55.769 [2024-12-13 11:17:16.190166] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:55.769 [2024-12-13 11:17:16.190212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1703353 ] 00:22:55.769 11:17:16 -- nvmf/common.sh@542 -- # cat 00:22:55.769 11:17:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:55.769 11:17:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:55.769 { 00:22:55.769 "params": { 00:22:55.769 "name": "Nvme$subsystem", 00:22:55.769 "trtype": "$TEST_TRANSPORT", 00:22:55.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:55.769 "adrfam": "ipv4", 00:22:55.769 "trsvcid": "$NVMF_PORT", 00:22:55.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:55.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:55.769 "hdgst": ${hdgst:-false}, 00:22:55.769 "ddgst": ${ddgst:-false} 00:22:55.769 }, 00:22:55.769 "method": "bdev_nvme_attach_controller" 00:22:55.769 } 00:22:55.769 EOF 00:22:55.769 )") 00:22:55.769 11:17:16 -- nvmf/common.sh@542 -- # cat 00:22:55.769 11:17:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:55.769 11:17:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:55.769 { 00:22:55.769 "params": { 00:22:55.769 "name": "Nvme$subsystem", 00:22:55.769 "trtype": "$TEST_TRANSPORT", 00:22:55.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:55.769 "adrfam": "ipv4", 00:22:55.769 "trsvcid": "$NVMF_PORT", 00:22:55.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:55.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:55.769 "hdgst": ${hdgst:-false}, 00:22:55.769 "ddgst": ${ddgst:-false} 00:22:55.769 }, 00:22:55.769 "method": "bdev_nvme_attach_controller" 00:22:55.769 } 00:22:55.769 EOF 00:22:55.769 )") 00:22:55.769 11:17:16 -- nvmf/common.sh@542 -- # cat 00:22:55.769 11:17:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:55.769 11:17:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:55.769 { 00:22:55.769 "params": { 00:22:55.769 "name": "Nvme$subsystem", 00:22:55.769 "trtype": "$TEST_TRANSPORT", 00:22:55.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:55.769 "adrfam": "ipv4", 00:22:55.769 "trsvcid": "$NVMF_PORT", 00:22:55.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:55.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:55.769 "hdgst": ${hdgst:-false}, 00:22:55.769 "ddgst": ${ddgst:-false} 00:22:55.769 }, 00:22:55.769 "method": "bdev_nvme_attach_controller" 00:22:55.769 } 00:22:55.769 EOF 00:22:55.769 )") 00:22:55.769 11:17:16 -- nvmf/common.sh@542 -- # cat 00:22:55.769 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.769 11:17:16 -- nvmf/common.sh@544 -- # jq . 
00:22:55.769 11:17:16 -- nvmf/common.sh@545 -- # IFS=, 00:22:55.769 11:17:16 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:22:55.769 "params": { 00:22:55.769 "name": "Nvme1", 00:22:55.769 "trtype": "rdma", 00:22:55.769 "traddr": "192.168.100.8", 00:22:55.769 "adrfam": "ipv4", 00:22:55.769 "trsvcid": "4420", 00:22:55.769 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.769 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:55.769 "hdgst": false, 00:22:55.770 "ddgst": false 00:22:55.770 }, 00:22:55.770 "method": "bdev_nvme_attach_controller" 00:22:55.770 },{ 00:22:55.770 "params": { 00:22:55.770 "name": "Nvme2", 00:22:55.770 "trtype": "rdma", 00:22:55.770 "traddr": "192.168.100.8", 00:22:55.770 "adrfam": "ipv4", 00:22:55.770 "trsvcid": "4420", 00:22:55.770 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:55.770 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:55.770 "hdgst": false, 00:22:55.770 "ddgst": false 00:22:55.770 }, 00:22:55.770 "method": "bdev_nvme_attach_controller" 00:22:55.770 },{ 00:22:55.770 "params": { 00:22:55.770 "name": "Nvme3", 00:22:55.770 "trtype": "rdma", 00:22:55.770 "traddr": "192.168.100.8", 00:22:55.770 "adrfam": "ipv4", 00:22:55.770 "trsvcid": "4420", 00:22:55.770 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:55.770 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:55.770 "hdgst": false, 00:22:55.770 "ddgst": false 00:22:55.770 }, 00:22:55.770 "method": "bdev_nvme_attach_controller" 00:22:55.770 },{ 00:22:55.770 "params": { 00:22:55.770 "name": "Nvme4", 00:22:55.770 "trtype": "rdma", 00:22:55.770 "traddr": "192.168.100.8", 00:22:55.770 "adrfam": "ipv4", 00:22:55.770 "trsvcid": "4420", 00:22:55.770 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:55.770 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:55.770 "hdgst": false, 00:22:55.770 "ddgst": false 00:22:55.770 }, 00:22:55.770 "method": "bdev_nvme_attach_controller" 00:22:55.770 },{ 00:22:55.770 "params": { 00:22:55.770 "name": "Nvme5", 00:22:55.770 "trtype": "rdma", 00:22:55.770 "traddr": "192.168.100.8", 00:22:55.770 "adrfam": "ipv4", 00:22:55.770 "trsvcid": "4420", 00:22:55.770 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:55.770 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:55.770 "hdgst": false, 00:22:55.770 "ddgst": false 00:22:55.770 }, 00:22:55.770 "method": "bdev_nvme_attach_controller" 00:22:55.770 },{ 00:22:55.770 "params": { 00:22:55.770 "name": "Nvme6", 00:22:55.770 "trtype": "rdma", 00:22:55.770 "traddr": "192.168.100.8", 00:22:55.770 "adrfam": "ipv4", 00:22:55.770 "trsvcid": "4420", 00:22:55.770 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:55.770 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:55.770 "hdgst": false, 00:22:55.770 "ddgst": false 00:22:55.770 }, 00:22:55.770 "method": "bdev_nvme_attach_controller" 00:22:55.770 },{ 00:22:55.770 "params": { 00:22:55.770 "name": "Nvme7", 00:22:55.770 "trtype": "rdma", 00:22:55.770 "traddr": "192.168.100.8", 00:22:55.770 "adrfam": "ipv4", 00:22:55.770 "trsvcid": "4420", 00:22:55.770 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:55.770 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:55.770 "hdgst": false, 00:22:55.770 "ddgst": false 00:22:55.770 }, 00:22:55.770 "method": "bdev_nvme_attach_controller" 00:22:55.770 },{ 00:22:55.770 "params": { 00:22:55.770 "name": "Nvme8", 00:22:55.770 "trtype": "rdma", 00:22:55.770 "traddr": "192.168.100.8", 00:22:55.770 "adrfam": "ipv4", 00:22:55.770 "trsvcid": "4420", 00:22:55.770 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:55.770 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:55.770 "hdgst": false, 00:22:55.770 "ddgst": false 00:22:55.770 }, 
00:22:55.770 "method": "bdev_nvme_attach_controller" 00:22:55.770 },{ 00:22:55.770 "params": { 00:22:55.770 "name": "Nvme9", 00:22:55.770 "trtype": "rdma", 00:22:55.770 "traddr": "192.168.100.8", 00:22:55.770 "adrfam": "ipv4", 00:22:55.770 "trsvcid": "4420", 00:22:55.770 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:55.770 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:55.770 "hdgst": false, 00:22:55.770 "ddgst": false 00:22:55.770 }, 00:22:55.770 "method": "bdev_nvme_attach_controller" 00:22:55.770 },{ 00:22:55.770 "params": { 00:22:55.770 "name": "Nvme10", 00:22:55.770 "trtype": "rdma", 00:22:55.770 "traddr": "192.168.100.8", 00:22:55.770 "adrfam": "ipv4", 00:22:55.770 "trsvcid": "4420", 00:22:55.770 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:55.770 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:55.770 "hdgst": false, 00:22:55.770 "ddgst": false 00:22:55.770 }, 00:22:55.770 "method": "bdev_nvme_attach_controller" 00:22:55.770 }' 00:22:55.770 [2024-12-13 11:17:16.244420] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.770 [2024-12-13 11:17:16.308969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.701 Running I/O for 10 seconds... 00:22:57.265 11:17:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:57.265 11:17:17 -- common/autotest_common.sh@862 -- # return 0 00:22:57.265 11:17:17 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:57.265 11:17:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.265 11:17:17 -- common/autotest_common.sh@10 -- # set +x 00:22:57.265 11:17:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.265 11:17:17 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:57.265 11:17:17 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:57.265 11:17:17 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:57.265 11:17:17 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:57.265 11:17:17 -- target/shutdown.sh@57 -- # local ret=1 00:22:57.265 11:17:17 -- target/shutdown.sh@58 -- # local i 00:22:57.265 11:17:17 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:57.265 11:17:17 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:57.265 11:17:17 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:57.265 11:17:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.265 11:17:17 -- common/autotest_common.sh@10 -- # set +x 00:22:57.265 11:17:17 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:57.523 11:17:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.523 11:17:17 -- target/shutdown.sh@60 -- # read_io_count=461 00:22:57.523 11:17:17 -- target/shutdown.sh@63 -- # '[' 461 -ge 100 ']' 00:22:57.523 11:17:17 -- target/shutdown.sh@64 -- # ret=0 00:22:57.523 11:17:17 -- target/shutdown.sh@65 -- # break 00:22:57.523 11:17:17 -- target/shutdown.sh@69 -- # return 0 00:22:57.523 11:17:17 -- target/shutdown.sh@134 -- # killprocess 1703037 00:22:57.523 11:17:17 -- common/autotest_common.sh@936 -- # '[' -z 1703037 ']' 00:22:57.523 11:17:17 -- common/autotest_common.sh@940 -- # kill -0 1703037 00:22:57.523 11:17:17 -- common/autotest_common.sh@941 -- # uname 00:22:57.523 11:17:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:57.523 11:17:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1703037 00:22:57.523 11:17:17 -- 
common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:57.523 11:17:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:57.523 11:17:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1703037' 00:22:57.523 killing process with pid 1703037 00:22:57.523 11:17:17 -- common/autotest_common.sh@955 -- # kill 1703037 00:22:57.523 11:17:17 -- common/autotest_common.sh@960 -- # wait 1703037 00:22:58.087 11:17:18 -- target/shutdown.sh@135 -- # nvmfpid= 00:22:58.087 11:17:18 -- target/shutdown.sh@138 -- # sleep 1 00:22:58.664 [2024-12-13 11:17:19.036928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.664 [2024-12-13 11:17:19.036965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:58.665 [2024-12-13 11:17:19.036975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.665 [2024-12-13 11:17:19.036981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:58.665 [2024-12-13 11:17:19.036987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.665 [2024-12-13 11:17:19.036993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:58.665 [2024-12-13 11:17:19.036999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.665 [2024-12-13 11:17:19.037004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:58.665 [2024-12-13 11:17:19.039524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:58.665 [2024-12-13 11:17:19.039564] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
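The kill of the target (pid 1703037) above was gated on bdevperf actually having completed I/O: the waitforio helper polls bdev_get_iostat on Nvme1n1 until at least 100 reads are recorded (461 here) before the test proceeds to tear the target down. A self-contained sketch of that gate, assuming scripts/rpc.py and jq are on PATH and a one-second poll interval:

# Sketch only: mirror the waitforio loop from target/shutdown.sh in the trace.
waitforio() {
    local sock=$1 bdev=$2
    local i ret=1 count
    for ((i = 10; i != 0; i--)); do
        count=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
        if [ "$count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 1    # poll interval assumed; the real helper's delay is not shown here
    done
    return $ret
}

# e.g. waitforio /var/tmp/bdevperf.sock Nvme1n1 || exit 1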
00:22:58.665 [2024-12-13 11:17:19.039628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.665 [2024-12-13 11:17:19.039659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:6970 cdw0:0 sqhd:e100 p:1 m:1 dnr:0 00:22:58.665 [2024-12-13 11:17:19.039670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.665 [2024-12-13 11:17:19.039678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:6970 cdw0:0 sqhd:e100 p:1 m:1 dnr:0 00:22:58.665 [2024-12-13 11:17:19.039687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.665 [2024-12-13 11:17:19.039694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:6970 cdw0:0 sqhd:e100 p:1 m:1 dnr:0 00:22:58.665 [2024-12-13 11:17:19.039703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.665 [2024-12-13 11:17:19.039712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:6970 cdw0:0 sqhd:e100 p:1 m:1 dnr:0 00:22:58.665 [2024-12-13 11:17:19.042143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:58.665 [2024-12-13 11:17:19.042177] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:58.665 [2024-12-13 11:17:19.042218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.665 [2024-12-13 11:17:19.042242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:6970 cdw0:0 sqhd:e100 p:1 m:1 dnr:0 00:22:58.665 [2024-12-13 11:17:19.042278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.665 [2024-12-13 11:17:19.042300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:6970 cdw0:0 sqhd:e100 p:1 m:1 dnr:0 00:22:58.665 [2024-12-13 11:17:19.042323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.665 [2024-12-13 11:17:19.042345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:6970 cdw0:0 sqhd:e100 p:1 m:1 dnr:0 00:22:58.665 [2024-12-13 11:17:19.042367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.665 [2024-12-13 11:17:19.042389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:6970 cdw0:0 sqhd:e100 p:1 m:1 dnr:0 00:22:58.665 [2024-12-13 11:17:19.044774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:58.665 [2024-12-13 11:17:19.044806] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:22:58.665 [2024-12-13 11:17:19.044845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.665 [2024-12-13 11:17:19.044869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:6970 cdw0:0 sqhd:e100 p:1 m:1 dnr:0 00:22:58.665 [2024-12-13 11:17:19.044892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.665 [2024-12-13 11:17:19.044914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:6970 cdw0:0 sqhd:e100 p:1 m:1 dnr:0 00:22:58.665 [2024-12-13 11:17:19.044937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.665 [2024-12-13 11:17:19.044959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:6970 cdw0:0 sqhd:e100 p:1 m:1 dnr:0 00:22:58.665 [2024-12-13 11:17:19.044989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.665 [2024-12-13 11:17:19.045011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:6970 cdw0:0 sqhd:e100 p:1 m:1 dnr:0 00:22:58.665 [2024-12-13 11:17:19.046965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:58.665 [2024-12-13 11:17:19.046997] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:58.665 [2024-12-13 11:17:19.047035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.665 [2024-12-13 11:17:19.047057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:6970 cdw0:0 sqhd:e100 p:1 m:1 dnr:0 00:22:58.665 [2024-12-13 11:17:19.047079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.665 [2024-12-13 11:17:19.047101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:6970 cdw0:0 sqhd:e100 p:1 m:1 dnr:0 00:22:58.665 [2024-12-13 11:17:19.047123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.665 [2024-12-13 11:17:19.047144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:6970 cdw0:0 sqhd:e100 p:1 m:1 dnr:0 00:22:58.665 [2024-12-13 11:17:19.047167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.665 [2024-12-13 11:17:19.047187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:6970 cdw0:0 sqhd:e100 p:1 m:1 dnr:0 00:22:58.665 [2024-12-13 11:17:19.049211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:58.665 [2024-12-13 11:17:19.049244] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:22:58.665 [2024-12-13 11:17:19.049295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.665 [2024-12-13 11:17:19.049320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:6970 cdw0:0 sqhd:e100 p:1 m:1 dnr:0 00:22:58.665 [2024-12-13 11:17:19.049344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.665 [2024-12-13 11:17:19.049365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:6970 cdw0:0 sqhd:e100 p:1 m:1 dnr:0 00:22:58.665 [2024-12-13 11:17:19.049388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.665 [2024-12-13 11:17:19.049409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:6970 cdw0:0 sqhd:e100 p:1 m:1 dnr:0 00:22:58.665 [2024-12-13 11:17:19.049431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.665 [2024-12-13 11:17:19.049453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:6970 cdw0:0 sqhd:e100 p:1 m:1 dnr:0 00:22:58.665 [2024-12-13 11:17:19.052005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:58.665 [2024-12-13 11:17:19.052019] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:58.665 [2024-12-13 11:17:19.052035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.665 [2024-12-13 11:17:19.052044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:6970 cdw0:0 sqhd:e100 p:1 m:1 dnr:0 00:22:58.665 [2024-12-13 11:17:19.052055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.665 [2024-12-13 11:17:19.052064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:6970 cdw0:0 sqhd:e100 p:1 m:1 dnr:0 00:22:58.665 [2024-12-13 11:17:19.052072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.665 [2024-12-13 11:17:19.052081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:6970 cdw0:0 sqhd:e100 p:1 m:1 dnr:0 00:22:58.665 [2024-12-13 11:17:19.052090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.665 [2024-12-13 11:17:19.052098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:6970 cdw0:0 sqhd:e100 p:1 m:1 dnr:0 00:22:58.665 [2024-12-13 11:17:19.054368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:58.665 [2024-12-13 11:17:19.054401] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:22:58.665 [2024-12-13 11:17:19.054439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.665 [2024-12-13 11:17:19.054462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:6970 cdw0:0 sqhd:e100 p:1 m:1 dnr:0 00:22:58.665 [2024-12-13 11:17:19.054486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.665 [2024-12-13 11:17:19.054507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:6970 cdw0:0 sqhd:e100 p:1 m:1 dnr:0 00:22:58.665 [2024-12-13 11:17:19.054530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.665 [2024-12-13 11:17:19.054551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:6970 cdw0:0 sqhd:e100 p:1 m:1 dnr:0 00:22:58.665 [2024-12-13 11:17:19.054573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.665 [2024-12-13 11:17:19.054595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:6970 cdw0:0 sqhd:e100 p:1 m:1 dnr:0 00:22:58.665 [2024-12-13 11:17:19.056953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:58.665 [2024-12-13 11:17:19.056985] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:58.665 [2024-12-13 11:17:19.057022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.665 [2024-12-13 11:17:19.057045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:6970 cdw0:0 sqhd:e100 p:1 m:1 dnr:0 00:22:58.665 [2024-12-13 11:17:19.057069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.665 [2024-12-13 11:17:19.057091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:6970 cdw0:0 sqhd:e100 p:1 m:1 dnr:0 00:22:58.666 [2024-12-13 11:17:19.057114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.666 [2024-12-13 11:17:19.057135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:6970 cdw0:0 sqhd:e100 p:1 m:1 dnr:0 00:22:58.666 [2024-12-13 11:17:19.057157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.666 [2024-12-13 11:17:19.057178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:6970 cdw0:0 sqhd:e100 p:1 m:1 dnr:0 00:22:58.666 [2024-12-13 11:17:19.060187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:58.666 [2024-12-13 11:17:19.060219] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:22:58.666 [2024-12-13 11:17:19.060257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.666 [2024-12-13 11:17:19.060322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:6970 cdw0:0 sqhd:e100 p:1 m:1 dnr:0 00:22:58.666 [2024-12-13 11:17:19.060345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.666 [2024-12-13 11:17:19.060367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:6970 cdw0:0 sqhd:e100 p:1 m:1 dnr:0 00:22:58.666 [2024-12-13 11:17:19.060389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.666 [2024-12-13 11:17:19.060410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:6970 cdw0:0 sqhd:e100 p:1 m:1 dnr:0 00:22:58.666 [2024-12-13 11:17:19.060433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.666 [2024-12-13 11:17:19.060454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:6970 cdw0:0 sqhd:e100 p:1 m:1 dnr:0 00:22:58.666 [2024-12-13 11:17:19.063042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:58.666 [2024-12-13 11:17:19.063074] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:58.666 [2024-12-13 11:17:19.065425] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019257100 was disconnected and freed. reset controller. 00:22:58.666 [2024-12-13 11:17:19.065440] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:58.666 [2024-12-13 11:17:19.066538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000702f180 len:0x10000 key:0x183b00 00:22:58.666 [2024-12-13 11:17:19.066554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.666 [2024-12-13 11:17:19.066572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071cfe80 len:0x10000 key:0x183b00 00:22:58.666 [2024-12-13 11:17:19.066583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.666 [2024-12-13 11:17:19.066596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:78336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e2eac0 len:0x10000 key:0x183400 00:22:58.666 [2024-12-13 11:17:19.066605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.666 [2024-12-13 11:17:19.066619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:78464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000717fc00 len:0x10000 key:0x183b00 00:22:58.666 [2024-12-13 11:17:19.066628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.666 [2024-12-13 11:17:19.066641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071afd80 len:0x10000 key:0x183b00 00:22:58.666 [2024-12-13 11:17:19.066650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.666 [2024-12-13 11:17:19.066667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002774c0 len:0x10000 key:0x183a00 00:22:58.666 [2024-12-13 11:17:19.066676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.666 [2024-12-13 11:17:19.066689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000713fa00 len:0x10000 key:0x183b00 00:22:58.666 [2024-12-13 11:17:19.066698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.666 [2024-12-13 11:17:19.066711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000714fa80 len:0x10000 key:0x183b00 00:22:58.666 [2024-12-13 11:17:19.066720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.666 [2024-12-13 11:17:19.066733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070ff800 len:0x10000 key:0x183b00 00:22:58.666 [2024-12-13 11:17:19.066741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.666 [2024-12-13 
11:17:19.066754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002372c0 len:0x10000 key:0x183a00 00:22:58.666 [2024-12-13 11:17:19.066763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.666 [2024-12-13 11:17:19.066777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000718fc80 len:0x10000 key:0x183b00 00:22:58.666 [2024-12-13 11:17:19.066785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.666 [2024-12-13 11:17:19.066798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000708f480 len:0x10000 key:0x183b00 00:22:58.666 [2024-12-13 11:17:19.066807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.666 [2024-12-13 11:17:19.066821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:79616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b22f780 len:0x10000 key:0x184300 00:22:58.666 [2024-12-13 11:17:19.066831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.666 [2024-12-13 11:17:19.066844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b23f800 len:0x10000 key:0x184300 00:22:58.666 [2024-12-13 11:17:19.066853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.666 [2024-12-13 11:17:19.066866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e4ebc0 len:0x10000 key:0x183400 00:22:58.666 [2024-12-13 11:17:19.066875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.666 [2024-12-13 11:17:19.066888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071bfe00 len:0x10000 key:0x183b00 00:22:58.666 [2024-12-13 11:17:19.066897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.666 [2024-12-13 11:17:19.066910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2efd80 len:0x10000 key:0x184300 00:22:58.666 [2024-12-13 11:17:19.066920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.666 [2024-12-13 11:17:19.066933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000710f880 len:0x10000 key:0x183b00 00:22:58.666 [2024-12-13 11:17:19.066943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.666 [2024-12-13 11:17:19.066956] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:80384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2dfd00 len:0x10000 key:0x184300 00:22:58.666 [2024-12-13 11:17:19.066964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.666 [2024-12-13 11:17:19.066977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:80512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000703f200 len:0x10000 key:0x183b00 00:22:58.666 [2024-12-13 11:17:19.066985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.666 [2024-12-13 11:17:19.066998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b27fa00 len:0x10000 key:0x184300 00:22:58.666 [2024-12-13 11:17:19.067007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.666 [2024-12-13 11:17:19.067020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:80768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000712f980 len:0x10000 key:0x183b00 00:22:58.666 [2024-12-13 11:17:19.067028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.666 [2024-12-13 11:17:19.067041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:80896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b24f880 len:0x10000 key:0x184300 00:22:58.666 [2024-12-13 11:17:19.067050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.666 [2024-12-13 11:17:19.067063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:81024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000701f100 len:0x10000 key:0x183b00 00:22:58.666 [2024-12-13 11:17:19.067072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.666 [2024-12-13 11:17:19.067084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070cf680 len:0x10000 key:0x183b00 00:22:58.666 [2024-12-13 11:17:19.067093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.666 [2024-12-13 11:17:19.067106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000704f280 len:0x10000 key:0x183b00 00:22:58.666 [2024-12-13 11:17:19.067115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.666 [2024-12-13 11:17:19.067128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2bfc00 len:0x10000 key:0x184300 00:22:58.666 [2024-12-13 11:17:19.067136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.666 [2024-12-13 11:17:19.067150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:5 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002573c0 len:0x10000 key:0x183a00 00:22:58.666 [2024-12-13 11:17:19.067161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.666 [2024-12-13 11:17:19.067175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:81664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000706f380 len:0x10000 key:0x183b00 00:22:58.666 [2024-12-13 11:17:19.067183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.667 [2024-12-13 11:17:19.067196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:81792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002b76c0 len:0x10000 key:0x183a00 00:22:58.667 [2024-12-13 11:17:19.067204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.667 [2024-12-13 11:17:19.067218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b28fa80 len:0x10000 key:0x184300 00:22:58.667 [2024-12-13 11:17:19.067227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.667 [2024-12-13 11:17:19.067242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:82048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000287540 len:0x10000 key:0x183a00 00:22:58.667 [2024-12-13 11:17:19.067250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.667 [2024-12-13 11:17:19.067263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000716fb80 len:0x10000 key:0x183b00 00:22:58.667 [2024-12-13 11:17:19.067279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.667 [2024-12-13 11:17:19.067293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2cfc80 len:0x10000 key:0x184300 00:22:58.667 [2024-12-13 11:17:19.067301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.667 [2024-12-13 11:17:19.067315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000719fd00 len:0x10000 key:0x183b00 00:22:58.667 [2024-12-13 11:17:19.067323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.667 [2024-12-13 11:17:19.067337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000707f400 len:0x10000 key:0x183b00 00:22:58.667 [2024-12-13 11:17:19.067347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.667 [2024-12-13 11:17:19.067360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82688 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b26f980 len:0x10000 key:0x184300 00:22:58.667 [2024-12-13 11:17:19.067368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.667 [2024-12-13 11:17:19.067382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071eff80 len:0x10000 key:0x183b00 00:22:58.667 [2024-12-13 11:17:19.067390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.667 [2024-12-13 11:17:19.067404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000207140 len:0x10000 key:0x183a00 00:22:58.667 [2024-12-13 11:17:19.067413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.667 [2024-12-13 11:17:19.067428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002c7740 len:0x10000 key:0x183a00 00:22:58.667 [2024-12-13 11:17:19.067437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.667 [2024-12-13 11:17:19.067450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:83200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071dff00 len:0x10000 key:0x183b00 00:22:58.667 [2024-12-13 11:17:19.067461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.667 [2024-12-13 11:17:19.067475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000227240 len:0x10000 key:0x183a00 00:22:58.667 [2024-12-13 11:17:19.067483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.667 [2024-12-13 11:17:19.067496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:83456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070df700 len:0x10000 key:0x183b00 00:22:58.667 [2024-12-13 11:17:19.067504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.667 [2024-12-13 11:17:19.067518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:83584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b29fb00 len:0x10000 key:0x184300 00:22:58.667 [2024-12-13 11:17:19.067526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.667 [2024-12-13 11:17:19.067539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e1ea40 len:0x10000 key:0x183400 00:22:58.667 [2024-12-13 11:17:19.067548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.667 [2024-12-13 11:17:19.067560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x200000267440 len:0x10000 key:0x183a00 00:22:58.667 [2024-12-13 11:17:19.067569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.667 [2024-12-13 11:17:19.067582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:72320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011409000 len:0x10000 key:0x184300 00:22:58.667 [2024-12-13 11:17:19.067590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.667 [2024-12-13 11:17:19.067605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:72448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001142a000 len:0x10000 key:0x184300 00:22:58.667 [2024-12-13 11:17:19.067614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.667 [2024-12-13 11:17:19.067628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:72576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013509000 len:0x10000 key:0x184300 00:22:58.667 [2024-12-13 11:17:19.067636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.667 [2024-12-13 11:17:19.067649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:72704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010f23000 len:0x10000 key:0x184300 00:22:58.667 [2024-12-13 11:17:19.067657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.667 [2024-12-13 11:17:19.067672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:72832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010f02000 len:0x10000 key:0x184300 00:22:58.667 [2024-12-13 11:17:19.067681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.667 [2024-12-13 11:17:19.067695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:73216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010ee1000 len:0x10000 key:0x184300 00:22:58.667 [2024-12-13 11:17:19.067703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.667 [2024-12-13 11:17:19.067716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:73856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010ec0000 len:0x10000 key:0x184300 00:22:58.667 [2024-12-13 11:17:19.067726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.667 [2024-12-13 11:17:19.067741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:74240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001254f000 len:0x10000 key:0x184300 00:22:58.667 [2024-12-13 11:17:19.067749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.667 [2024-12-13 11:17:19.067762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:74496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001252e000 len:0x10000 key:0x184300 
00:22:58.667 [2024-12-13 11:17:19.067770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.667 [2024-12-13 11:17:19.067784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001250d000 len:0x10000 key:0x184300 00:22:58.667 [2024-12-13 11:17:19.067793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.667 [2024-12-13 11:17:19.067806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:75136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011700000 len:0x10000 key:0x184300 00:22:58.667 [2024-12-13 11:17:19.067814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.667 [2024-12-13 11:17:19.067827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b5ef000 len:0x10000 key:0x184300 00:22:58.667 [2024-12-13 11:17:19.067836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.667 [2024-12-13 11:17:19.067850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:75520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b5ce000 len:0x10000 key:0x184300 00:22:58.667 [2024-12-13 11:17:19.067858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.667 [2024-12-13 11:17:19.067871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:75648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b5ad000 len:0x10000 key:0x184300 00:22:58.667 [2024-12-13 11:17:19.067879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.667 [2024-12-13 11:17:19.067894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:75904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b58c000 len:0x10000 key:0x184300 00:22:58.667 [2024-12-13 11:17:19.067903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.667 [2024-12-13 11:17:19.067916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:76416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b56b000 len:0x10000 key:0x184300 00:22:58.667 [2024-12-13 11:17:19.067926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.667 [2024-12-13 11:17:19.067940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:77696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b4c6000 len:0x10000 key:0x184300 00:22:58.667 [2024-12-13 11:17:19.067949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.667 [2024-12-13 11:17:19.067963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:77824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b4a5000 len:0x10000 key:0x184300 00:22:58.667 [2024-12-13 11:17:19.067971] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.667 [2024-12-13 11:17:19.071251] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256ec0 was disconnected and freed. reset controller. 00:22:58.667 [2024-12-13 11:17:19.071298] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:58.667 [2024-12-13 11:17:19.071332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000008df100 len:0x10000 key:0x183c00 00:22:58.668 [2024-12-13 11:17:19.071359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.668 [2024-12-13 11:17:19.071413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000057fa80 len:0x10000 key:0x184000 00:22:58.668 [2024-12-13 11:17:19.071438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.668 [2024-12-13 11:17:19.071472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000047f280 len:0x10000 key:0x184000 00:22:58.668 [2024-12-13 11:17:19.071495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.668 [2024-12-13 11:17:19.071529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001953fa80 len:0x10000 key:0x182a00 00:22:58.668 [2024-12-13 11:17:19.071552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.668 [2024-12-13 11:17:19.071585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001956fc00 len:0x10000 key:0x182a00 00:22:58.668 [2024-12-13 11:17:19.071610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.668 [2024-12-13 11:17:19.071654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000056fa00 len:0x10000 key:0x184000 00:22:58.668 [2024-12-13 11:17:19.071663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.668 [2024-12-13 11:17:19.071677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004ef600 len:0x10000 key:0x184000 00:22:58.668 [2024-12-13 11:17:19.071686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.668 [2024-12-13 11:17:19.071700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000008bf000 len:0x10000 key:0x183c00 00:22:58.668 [2024-12-13 11:17:19.071710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.668 [2024-12-13 11:17:19.071728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000048f300 len:0x10000 key:0x184000 00:22:58.668 [2024-12-13 11:17:19.071738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.668 [2024-12-13 11:17:19.071751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194ef800 len:0x10000 key:0x182a00 00:22:58.668 [2024-12-13 11:17:19.071760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.668 [2024-12-13 11:17:19.071773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004cf500 len:0x10000 key:0x184000 00:22:58.668 [2024-12-13 11:17:19.071782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.668 [2024-12-13 11:17:19.071795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000086ed80 len:0x10000 key:0x183c00 00:22:58.668 [2024-12-13 11:17:19.071805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.668 [2024-12-13 11:17:19.071818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:80384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000700f080 len:0x10000 key:0x183b00 00:22:58.668 [2024-12-13 11:17:19.071827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.668 [2024-12-13 11:17:19.071841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:80512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000046f200 len:0x10000 key:0x184000 00:22:58.668 [2024-12-13 11:17:19.071849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.668 [2024-12-13 11:17:19.071863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001954fb00 len:0x10000 key:0x182a00 00:22:58.668 [2024-12-13 11:17:19.071873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.668 [2024-12-13 11:17:19.071887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000195f0000 len:0x10000 key:0x182a00 00:22:58.668 [2024-12-13 11:17:19.071895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.668 [2024-12-13 11:17:19.071908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:80896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000059fb80 len:0x10000 key:0x184000 00:22:58.668 [2024-12-13 11:17:19.071918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 
00:22:58.668 [2024-12-13 11:17:19.071932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:81024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000084ec80 len:0x10000 key:0x183c00 00:22:58.668 [2024-12-13 11:17:19.071941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.668 [2024-12-13 11:17:19.071953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004af400 len:0x10000 key:0x184000 00:22:58.668 [2024-12-13 11:17:19.071963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.668 [2024-12-13 11:17:19.071978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004ff680 len:0x10000 key:0x184000 00:22:58.668 [2024-12-13 11:17:19.071989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.668 [2024-12-13 11:17:19.072002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000045f180 len:0x10000 key:0x184000 00:22:58.668 [2024-12-13 11:17:19.072011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.668 [2024-12-13 11:17:19.072024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001955fb80 len:0x10000 key:0x182a00 00:22:58.668 [2024-12-13 11:17:19.072032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.668 [2024-12-13 11:17:19.072045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:81664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004df580 len:0x10000 key:0x184000 00:22:58.668 [2024-12-13 11:17:19.072054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.668 [2024-12-13 11:17:19.072067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:81792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005afc00 len:0x10000 key:0x184000 00:22:58.668 [2024-12-13 11:17:19.072075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.668 [2024-12-13 11:17:19.072088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000195dff80 len:0x10000 key:0x182a00 00:22:58.668 [2024-12-13 11:17:19.072096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.668 [2024-12-13 11:17:19.072110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:82048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000044f100 len:0x10000 key:0x184000 00:22:58.668 [2024-12-13 11:17:19.072119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.668 [2024-12-13 
11:17:19.072132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000058fb00 len:0x10000 key:0x184000 00:22:58.668 [2024-12-13 11:17:19.072140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.668 [2024-12-13 11:17:19.072153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000043f080 len:0x10000 key:0x184000 00:22:58.668 [2024-12-13 11:17:19.072162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.668 [2024-12-13 11:17:19.072174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001959fd80 len:0x10000 key:0x182a00 00:22:58.668 [2024-12-13 11:17:19.072183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.668 [2024-12-13 11:17:19.072195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000055f980 len:0x10000 key:0x184000 00:22:58.668 [2024-12-13 11:17:19.072203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.668 [2024-12-13 11:17:19.072216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001957fc80 len:0x10000 key:0x182a00 00:22:58.668 [2024-12-13 11:17:19.072227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.668 [2024-12-13 11:17:19.072240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000008cf080 len:0x10000 key:0x183c00 00:22:58.668 [2024-12-13 11:17:19.072248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.668 [2024-12-13 11:17:19.072261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:82944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000008aef80 len:0x10000 key:0x183c00 00:22:58.668 [2024-12-13 11:17:19.072276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.668 [2024-12-13 11:17:19.072290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000085ed00 len:0x10000 key:0x183c00 00:22:58.668 [2024-12-13 11:17:19.072298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.668 [2024-12-13 11:17:19.072311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005cfd00 len:0x10000 key:0x184000 00:22:58.668 [2024-12-13 11:17:19.072319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.668 [2024-12-13 11:17:19.072333] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001958fd00 len:0x10000 key:0x182a00 00:22:58.668 [2024-12-13 11:17:19.072342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.668 [2024-12-13 11:17:19.072356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:83456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000087ee00 len:0x10000 key:0x183c00 00:22:58.668 [2024-12-13 11:17:19.072365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.669 [2024-12-13 11:17:19.072377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:83584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000083ec00 len:0x10000 key:0x183c00 00:22:58.669 [2024-12-13 11:17:19.072386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.669 [2024-12-13 11:17:19.072399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000082eb80 len:0x10000 key:0x183c00 00:22:58.669 [2024-12-13 11:17:19.072407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.669 [2024-12-13 11:17:19.072420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005dfd80 len:0x10000 key:0x184000 00:22:58.669 [2024-12-13 11:17:19.072428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.669 [2024-12-13 11:17:19.072442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000050f700 len:0x10000 key:0x184000 00:22:58.669 [2024-12-13 11:17:19.072450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.669 [2024-12-13 11:17:19.072464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005efe00 len:0x10000 key:0x184000 00:22:58.669 [2024-12-13 11:17:19.072490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.669 [2024-12-13 11:17:19.072503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000195bfe80 len:0x10000 key:0x182a00 00:22:58.669 [2024-12-13 11:17:19.072512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.669 [2024-12-13 11:17:19.072524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000081eb00 len:0x10000 key:0x183c00 00:22:58.669 [2024-12-13 11:17:19.072533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.669 [2024-12-13 11:17:19.072546] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001951f980 len:0x10000 key:0x182a00 00:22:58.669 [2024-12-13 11:17:19.072555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.669 [2024-12-13 11:17:19.072568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004bf480 len:0x10000 key:0x184000 00:22:58.669 [2024-12-13 11:17:19.072576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.669 [2024-12-13 11:17:19.072589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000041ef80 len:0x10000 key:0x184000 00:22:58.669 [2024-12-13 11:17:19.072598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.669 [2024-12-13 11:17:19.072611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000052f800 len:0x10000 key:0x184000 00:22:58.669 [2024-12-13 11:17:19.072620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.669 [2024-12-13 11:17:19.072635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000049f380 len:0x10000 key:0x184000 00:22:58.669 [2024-12-13 11:17:19.072643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.669 [2024-12-13 11:17:19.072656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000008ef180 len:0x10000 key:0x183c00 00:22:58.669 [2024-12-13 11:17:19.072665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.669 [2024-12-13 11:17:19.072678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:73856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011322000 len:0x10000 key:0x184300 00:22:58.669 [2024-12-13 11:17:19.072687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.669 [2024-12-13 11:17:19.072701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:74240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011b83000 len:0x10000 key:0x184300 00:22:58.669 [2024-12-13 11:17:19.072711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.669 [2024-12-13 11:17:19.072725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:74496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011b62000 len:0x10000 key:0x184300 00:22:58.669 [2024-12-13 11:17:19.072734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.669 [2024-12-13 11:17:19.072749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 
lba:75008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bc40000 len:0x10000 key:0x184300 00:22:58.669 [2024-12-13 11:17:19.072758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.669 [2024-12-13 11:17:19.072772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d0bf000 len:0x10000 key:0x184300 00:22:58.669 [2024-12-13 11:17:19.072781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.669 [2024-12-13 11:17:19.072795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:75392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d09e000 len:0x10000 key:0x184300 00:22:58.669 [2024-12-13 11:17:19.072803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.669 [2024-12-13 11:17:19.072817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:75520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000110f1000 len:0x10000 key:0x184300 00:22:58.669 [2024-12-13 11:17:19.072826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.669 [2024-12-13 11:17:19.072840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:75648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000110d0000 len:0x10000 key:0x184300 00:22:58.669 [2024-12-13 11:17:19.072848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.669 [2024-12-13 11:17:19.072862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:75904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001275f000 len:0x10000 key:0x184300 00:22:58.669 [2024-12-13 11:17:19.072871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.669 [2024-12-13 11:17:19.072884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:76416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001273e000 len:0x10000 key:0x184300 00:22:58.669 [2024-12-13 11:17:19.072893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.669 [2024-12-13 11:17:19.072906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:77696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001271d000 len:0x10000 key:0x184300 00:22:58.669 [2024-12-13 11:17:19.072914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.669 [2024-12-13 11:17:19.072928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:77824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012846000 len:0x10000 key:0x184300 00:22:58.669 [2024-12-13 11:17:19.072937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.669 [2024-12-13 11:17:19.072950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x200012825000 len:0x10000 key:0x184300 00:22:58.669 [2024-12-13 11:17:19.072959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.669 [2024-12-13 11:17:19.072972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012804000 len:0x10000 key:0x184300 00:22:58.669 [2024-12-13 11:17:19.072982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.669 [2024-12-13 11:17:19.076053] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256c80 was disconnected and freed. reset controller. 00:22:58.669 [2024-12-13 11:17:19.076089] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:58.669 [2024-12-13 11:17:19.076123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001997fc80 len:0x10000 key:0x182c00 00:22:58.669 [2024-12-13 11:17:19.076147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.669 [2024-12-13 11:17:19.076186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001969fb80 len:0x10000 key:0x182b00 00:22:58.669 [2024-12-13 11:17:19.076210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.669 [2024-12-13 11:17:19.076245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001989f580 len:0x10000 key:0x182c00 00:22:58.669 [2024-12-13 11:17:19.076277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.669 [2024-12-13 11:17:19.076313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001948f500 len:0x10000 key:0x182a00 00:22:58.670 [2024-12-13 11:17:19.076335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.670 [2024-12-13 11:17:19.076369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:81792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001983f280 len:0x10000 key:0x182c00 00:22:58.670 [2024-12-13 11:17:19.076392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.670 [2024-12-13 11:17:19.076426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198bf680 len:0x10000 key:0x182c00 00:22:58.670 [2024-12-13 11:17:19.076449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.670 [2024-12-13 11:17:19.076483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199afe00 len:0x10000 key:0x182c00 00:22:58.670 [2024-12-13 11:17:19.076514] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.670 [2024-12-13 11:17:19.076528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001996fc00 len:0x10000 key:0x182c00 00:22:58.670 [2024-12-13 11:17:19.076537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.670 [2024-12-13 11:17:19.076550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199bfe80 len:0x10000 key:0x182c00 00:22:58.670 [2024-12-13 11:17:19.076559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.670 [2024-12-13 11:17:19.076573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001964f900 len:0x10000 key:0x182b00 00:22:58.670 [2024-12-13 11:17:19.076582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.670 [2024-12-13 11:17:19.076595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198af600 len:0x10000 key:0x182c00 00:22:58.670 [2024-12-13 11:17:19.076606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.670 [2024-12-13 11:17:19.076619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001966fa00 len:0x10000 key:0x182b00 00:22:58.670 [2024-12-13 11:17:19.076629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.670 [2024-12-13 11:17:19.076641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196cfd00 len:0x10000 key:0x182b00 00:22:58.670 [2024-12-13 11:17:19.076650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.670 [2024-12-13 11:17:19.076663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:82944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001943f280 len:0x10000 key:0x182a00 00:22:58.670 [2024-12-13 11:17:19.076674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.670 [2024-12-13 11:17:19.076689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198cf700 len:0x10000 key:0x182c00 00:22:58.670 [2024-12-13 11:17:19.076699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.670 [2024-12-13 11:17:19.076712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bf0000 len:0x10000 key:0x182d00 00:22:58.670 [2024-12-13 11:17:19.076720] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.670 [2024-12-13 11:17:19.076734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196dfd80 len:0x10000 key:0x182b00 00:22:58.670 [2024-12-13 11:17:19.076743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.670 [2024-12-13 11:17:19.076756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:83456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001947f480 len:0x10000 key:0x182a00 00:22:58.670 [2024-12-13 11:17:19.076765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.670 [2024-12-13 11:17:19.076778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001995fb80 len:0x10000 key:0x182c00 00:22:58.670 [2024-12-13 11:17:19.076787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.670 [2024-12-13 11:17:19.076800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001981f180 len:0x10000 key:0x182c00 00:22:58.670 [2024-12-13 11:17:19.076809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.670 [2024-12-13 11:17:19.076821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001990f900 len:0x10000 key:0x182c00 00:22:58.670 [2024-12-13 11:17:19.076830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.670 [2024-12-13 11:17:19.076844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bcff00 len:0x10000 key:0x182d00 00:22:58.670 [2024-12-13 11:17:19.076855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.670 [2024-12-13 11:17:19.076868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001941f180 len:0x10000 key:0x182a00 00:22:58.670 [2024-12-13 11:17:19.076877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.670 [2024-12-13 11:17:19.076890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194df780 len:0x10000 key:0x182a00 00:22:58.670 [2024-12-13 11:17:19.076899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.670 [2024-12-13 11:17:19.076912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001965f980 len:0x10000 key:0x182b00 00:22:58.670 [2024-12-13 11:17:19.076921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.670 [2024-12-13 11:17:19.076934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bbfe80 len:0x10000 key:0x182d00 00:22:58.670 [2024-12-13 11:17:19.076943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.670 [2024-12-13 11:17:19.076956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001960f700 len:0x10000 key:0x182b00 00:22:58.670 [2024-12-13 11:17:19.076965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.670 [2024-12-13 11:17:19.076978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198ef800 len:0x10000 key:0x182c00 00:22:58.670 [2024-12-13 11:17:19.076986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.670 [2024-12-13 11:17:19.077000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001988f500 len:0x10000 key:0x182c00 00:22:58.670 [2024-12-13 11:17:19.077009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.670 [2024-12-13 11:17:19.077021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001946f400 len:0x10000 key:0x182a00 00:22:58.670 [2024-12-13 11:17:19.077030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.670 [2024-12-13 11:17:19.077043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001985f380 len:0x10000 key:0x182c00 00:22:58.670 [2024-12-13 11:17:19.077052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.670 [2024-12-13 11:17:19.077065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001987f480 len:0x10000 key:0x182c00 00:22:58.670 [2024-12-13 11:17:19.077074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.670 [2024-12-13 11:17:19.077087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194bf680 len:0x10000 key:0x182a00 00:22:58.670 [2024-12-13 11:17:19.077096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.670 [2024-12-13 11:17:19.077111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001949f580 len:0x10000 key:0x182a00 00:22:58.670 [2024-12-13 11:17:19.077120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 
m:0 dnr:0 00:22:58.670 [2024-12-13 11:17:19.077133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199f0000 len:0x10000 key:0x182c00 00:22:58.670 [2024-12-13 11:17:19.077141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.670 [2024-12-13 11:17:19.077154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bdff80 len:0x10000 key:0x182d00 00:22:58.670 [2024-12-13 11:17:19.077164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.670 [2024-12-13 11:17:19.077177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196afc00 len:0x10000 key:0x182b00 00:22:58.670 [2024-12-13 11:17:19.077186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.670 [2024-12-13 11:17:19.077199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001994fb00 len:0x10000 key:0x182c00 00:22:58.670 [2024-12-13 11:17:19.077208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.670 [2024-12-13 11:17:19.077220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198df780 len:0x10000 key:0x182c00 00:22:58.670 [2024-12-13 11:17:19.077230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.670 [2024-12-13 11:17:19.077243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001993fa80 len:0x10000 key:0x182c00 00:22:58.670 [2024-12-13 11:17:19.077251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.670 [2024-12-13 11:17:19.077264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001968fb00 len:0x10000 key:0x182b00 00:22:58.671 [2024-12-13 11:17:19.077289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.671 [2024-12-13 11:17:19.077301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196bfc80 len:0x10000 key:0x182b00 00:22:58.671 [2024-12-13 11:17:19.077310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.671 [2024-12-13 11:17:19.077323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001963f880 len:0x10000 key:0x182b00 00:22:58.671 [2024-12-13 11:17:19.077331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.671 [2024-12-13 
11:17:19.077345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001961f780 len:0x10000 key:0x182b00 00:22:58.671 [2024-12-13 11:17:19.077353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.671 [2024-12-13 11:17:19.077368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001967fa80 len:0x10000 key:0x182b00 00:22:58.671 [2024-12-13 11:17:19.077377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.671 [2024-12-13 11:17:19.077390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001986f400 len:0x10000 key:0x182c00 00:22:58.671 [2024-12-13 11:17:19.077399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.671 [2024-12-13 11:17:19.077413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c81c000 len:0x10000 key:0x184300 00:22:58.671 [2024-12-13 11:17:19.077422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.671 [2024-12-13 11:17:19.077436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:76416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c83d000 len:0x10000 key:0x184300 00:22:58.671 [2024-12-13 11:17:19.077446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.671 [2024-12-13 11:17:19.077459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:77696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d818000 len:0x10000 key:0x184300 00:22:58.671 [2024-12-13 11:17:19.077468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.671 [2024-12-13 11:17:19.077481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:77824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d7f7000 len:0x10000 key:0x184300 00:22:58.671 [2024-12-13 11:17:19.077490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.671 [2024-12-13 11:17:19.077504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d7d6000 len:0x10000 key:0x184300 00:22:58.671 [2024-12-13 11:17:19.077513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.671 [2024-12-13 11:17:19.077527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d7b5000 len:0x10000 key:0x184300 00:22:58.671 [2024-12-13 11:17:19.077536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.671 [2024-12-13 11:17:19.077550] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011952000 len:0x10000 key:0x184300 00:22:58.671 [2024-12-13 11:17:19.077559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.671 [2024-12-13 11:17:19.077573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011931000 len:0x10000 key:0x184300 00:22:58.671 [2024-12-13 11:17:19.077582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.671 [2024-12-13 11:17:19.077595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011910000 len:0x10000 key:0x184300 00:22:58.671 [2024-12-13 11:17:19.077604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.671 [2024-12-13 11:17:19.077618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001296f000 len:0x10000 key:0x184300 00:22:58.671 [2024-12-13 11:17:19.077630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.671 [2024-12-13 11:17:19.077643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001294e000 len:0x10000 key:0x184300 00:22:58.671 [2024-12-13 11:17:19.077652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.671 [2024-12-13 11:17:19.077665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012a35000 len:0x10000 key:0x184300 00:22:58.671 [2024-12-13 11:17:19.077674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.671 [2024-12-13 11:17:19.077689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012a14000 len:0x10000 key:0x184300 00:22:58.671 [2024-12-13 11:17:19.077698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.671 [2024-12-13 11:17:19.077711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000129f3000 len:0x10000 key:0x184300 00:22:58.671 [2024-12-13 11:17:19.077721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.671 [2024-12-13 11:17:19.077734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000129d2000 len:0x10000 key:0x184300 00:22:58.671 [2024-12-13 11:17:19.077743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.671 [2024-12-13 11:17:19.077756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000129b1000 len:0x10000 key:0x184300 00:22:58.671 [2024-12-13 11:17:19.077765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.671 [2024-12-13 11:17:19.077779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b75a000 len:0x10000 key:0x184300 00:22:58.671 [2024-12-13 11:17:19.077787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.671 [2024-12-13 11:17:19.077801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b739000 len:0x10000 key:0x184300 00:22:58.671 [2024-12-13 11:17:19.077809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.671 [2024-12-13 11:17:19.080904] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256a40 was disconnected and freed. reset controller. 00:22:58.671 [2024-12-13 11:17:19.080939] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:58.671 [2024-12-13 11:17:19.080972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d8fd00 len:0x10000 key:0x182e00 00:22:58.671 [2024-12-13 11:17:19.080994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.671 [2024-12-13 11:17:19.081033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:83200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ddff80 len:0x10000 key:0x182e00 00:22:58.671 [2024-12-13 11:17:19.081057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.671 [2024-12-13 11:17:19.081098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019cbf680 len:0x10000 key:0x182e00 00:22:58.671 [2024-12-13 11:17:19.081121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.671 [2024-12-13 11:17:19.081157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:83456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d5fb80 len:0x10000 key:0x182e00 00:22:58.671 [2024-12-13 11:17:19.081179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.671 [2024-12-13 11:17:19.081214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f1f980 len:0x10000 key:0x182f00 00:22:58.671 [2024-12-13 11:17:19.081237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.671 [2024-12-13 11:17:19.081293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a4f900 len:0x10000 key:0x182d00 
00:22:58.671 [2024-12-13 11:17:19.081318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.671 [2024-12-13 11:17:19.081351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019fdff80 len:0x10000 key:0x182f00 00:22:58.671 [2024-12-13 11:17:19.081375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.671 [2024-12-13 11:17:19.081390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ccf700 len:0x10000 key:0x182e00 00:22:58.671 [2024-12-13 11:17:19.081398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.671 [2024-12-13 11:17:19.081411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ff0000 len:0x10000 key:0x182f00 00:22:58.671 [2024-12-13 11:17:19.081420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.671 [2024-12-13 11:17:19.081434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c9f580 len:0x10000 key:0x182e00 00:22:58.671 [2024-12-13 11:17:19.081443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.671 [2024-12-13 11:17:19.081455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d9fd80 len:0x10000 key:0x182e00 00:22:58.671 [2024-12-13 11:17:19.081464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.671 [2024-12-13 11:17:19.081478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a3f880 len:0x10000 key:0x182d00 00:22:58.671 [2024-12-13 11:17:19.081487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.671 [2024-12-13 11:17:19.081500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f7fc80 len:0x10000 key:0x182f00 00:22:58.671 [2024-12-13 11:17:19.081508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.672 [2024-12-13 11:17:19.081524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a1f780 len:0x10000 key:0x182d00 00:22:58.672 [2024-12-13 11:17:19.081533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.672 [2024-12-13 11:17:19.081547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019fbfe80 len:0x10000 key:0x182f00 00:22:58.672 [2024-12-13 
11:17:19.081557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.672 [2024-12-13 11:17:19.081571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019dcff00 len:0x10000 key:0x182e00 00:22:58.672 [2024-12-13 11:17:19.081580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.672 [2024-12-13 11:17:19.081593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c8f500 len:0x10000 key:0x182e00 00:22:58.672 [2024-12-13 11:17:19.081601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.672 [2024-12-13 11:17:19.081614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019cff880 len:0x10000 key:0x182e00 00:22:58.672 [2024-12-13 11:17:19.081623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.672 [2024-12-13 11:17:19.081637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d2fa00 len:0x10000 key:0x182e00 00:22:58.672 [2024-12-13 11:17:19.081648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.672 [2024-12-13 11:17:19.081661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c5f380 len:0x10000 key:0x182e00 00:22:58.672 [2024-12-13 11:17:19.081669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.672 [2024-12-13 11:17:19.081683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019dafe00 len:0x10000 key:0x182e00 00:22:58.672 [2024-12-13 11:17:19.081691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.672 [2024-12-13 11:17:19.081704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a6fa00 len:0x10000 key:0x182d00 00:22:58.672 [2024-12-13 11:17:19.081713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.672 [2024-12-13 11:17:19.081725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d1f980 len:0x10000 key:0x182e00 00:22:58.672 [2024-12-13 11:17:19.081734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.672 [2024-12-13 11:17:19.081748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a2f800 len:0x10000 key:0x182d00 00:22:58.672 [2024-12-13 11:17:19.081756] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.672 [2024-12-13 11:17:19.081769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f9fd80 len:0x10000 key:0x182f00 00:22:58.672 [2024-12-13 11:17:19.081779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.672 [2024-12-13 11:17:19.081792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c7f480 len:0x10000 key:0x182e00 00:22:58.672 [2024-12-13 11:17:19.081801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.672 [2024-12-13 11:17:19.081813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c2f200 len:0x10000 key:0x182e00 00:22:58.672 [2024-12-13 11:17:19.081821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.672 [2024-12-13 11:17:19.081834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c1f180 len:0x10000 key:0x182e00 00:22:58.672 [2024-12-13 11:17:19.081843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.672 [2024-12-13 11:17:19.081856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f2fa00 len:0x10000 key:0x182f00 00:22:58.672 [2024-12-13 11:17:19.081864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.672 [2024-12-13 11:17:19.081877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ebf680 len:0x10000 key:0x182f00 00:22:58.672 [2024-12-13 11:17:19.081885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.672 [2024-12-13 11:17:19.081899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d7fc80 len:0x10000 key:0x182e00 00:22:58.672 [2024-12-13 11:17:19.081908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.672 [2024-12-13 11:17:19.081921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019cef800 len:0x10000 key:0x182e00 00:22:58.672 [2024-12-13 11:17:19.081929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.672 [2024-12-13 11:17:19.081941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019eef800 len:0x10000 key:0x182f00 00:22:58.672 [2024-12-13 11:17:19.081950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.672 [2024-12-13 11:17:19.081963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019eaf600 len:0x10000 key:0x182f00 00:22:58.672 [2024-12-13 11:17:19.081972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.672 [2024-12-13 11:17:19.081985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c0f100 len:0x10000 key:0x182e00 00:22:58.672 [2024-12-13 11:17:19.081995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.672 [2024-12-13 11:17:19.082008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f3fa80 len:0x10000 key:0x182f00 00:22:58.672 [2024-12-13 11:17:19.082018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.672 [2024-12-13 11:17:19.082031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f0f900 len:0x10000 key:0x182f00 00:22:58.672 [2024-12-13 11:17:19.082040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.672 [2024-12-13 11:17:19.082054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f8fd00 len:0x10000 key:0x182f00 00:22:58.672 [2024-12-13 11:17:19.082063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.672 [2024-12-13 11:17:19.082076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:77696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011511000 len:0x10000 key:0x184300 00:22:58.672 [2024-12-13 11:17:19.082085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.672 [2024-12-13 11:17:19.082098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:77824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000114f0000 len:0x10000 key:0x184300 00:22:58.672 [2024-12-13 11:17:19.082106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.672 [2024-12-13 11:17:19.082121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000118ef000 len:0x10000 key:0x184300 00:22:58.672 [2024-12-13 11:17:19.082130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.672 [2024-12-13 11:17:19.082143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000118ce000 len:0x10000 key:0x184300 00:22:58.672 [2024-12-13 11:17:19.082151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.672 [2024-12-13 11:17:19.082164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000118ad000 len:0x10000 key:0x184300 00:22:58.672 [2024-12-13 11:17:19.082173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.672 [2024-12-13 11:17:19.082187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012b1c000 len:0x10000 key:0x184300 00:22:58.672 [2024-12-13 11:17:19.082195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.672 [2024-12-13 11:17:19.082209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012afb000 len:0x10000 key:0x184300 00:22:58.672 [2024-12-13 11:17:19.082217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.672 [2024-12-13 11:17:19.082231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012ada000 len:0x10000 key:0x184300 00:22:58.672 [2024-12-13 11:17:19.082240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.672 [2024-12-13 11:17:19.082254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012c45000 len:0x10000 key:0x184300 00:22:58.672 [2024-12-13 11:17:19.082263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.672 [2024-12-13 11:17:19.082289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012c24000 len:0x10000 key:0x184300 00:22:58.672 [2024-12-13 11:17:19.082299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.672 [2024-12-13 11:17:19.082312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012c03000 len:0x10000 key:0x184300 00:22:58.672 [2024-12-13 11:17:19.082321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.672 [2024-12-13 11:17:19.082334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012be2000 len:0x10000 key:0x184300 00:22:58.672 [2024-12-13 11:17:19.082343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.673 [2024-12-13 11:17:19.082357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012bc1000 len:0x10000 key:0x184300 00:22:58.673 [2024-12-13 11:17:19.082368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 
00:22:58.673 [2024-12-13 11:17:19.082381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012ba0000 len:0x10000 key:0x184300 00:22:58.673 [2024-12-13 11:17:19.082390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.673 [2024-12-13 11:17:19.082404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bc1f000 len:0x10000 key:0x184300 00:22:58.673 [2024-12-13 11:17:19.082413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.673 [2024-12-13 11:17:19.082426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bbfe000 len:0x10000 key:0x184300 00:22:58.673 [2024-12-13 11:17:19.082435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.673 [2024-12-13 11:17:19.082448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b949000 len:0x10000 key:0x184300 00:22:58.673 [2024-12-13 11:17:19.082458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.673 [2024-12-13 11:17:19.082471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b928000 len:0x10000 key:0x184300 00:22:58.673 [2024-12-13 11:17:19.082479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.673 [2024-12-13 11:17:19.082492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b907000 len:0x10000 key:0x184300 00:22:58.673 [2024-12-13 11:17:19.082502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.673 [2024-12-13 11:17:19.082515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b8e6000 len:0x10000 key:0x184300 00:22:58.673 [2024-12-13 11:17:19.088612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.673 [2024-12-13 11:17:19.088716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f8d6000 len:0x10000 key:0x184300 00:22:58.673 [2024-12-13 11:17:19.088747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.673 [2024-12-13 11:17:19.088784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f8f7000 len:0x10000 key:0x184300 00:22:58.673 [2024-12-13 11:17:19.088794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.673 [2024-12-13 11:17:19.088808] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f918000 len:0x10000 key:0x184300 00:22:58.673 [2024-12-13 11:17:19.088817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.673 [2024-12-13 11:17:19.088831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011ebc000 len:0x10000 key:0x184300 00:22:58.673 [2024-12-13 11:17:19.088840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.673 [2024-12-13 11:17:19.088854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011e9b000 len:0x10000 key:0x184300 00:22:58.673 [2024-12-13 11:17:19.088863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.673 [2024-12-13 11:17:19.088877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011e7a000 len:0x10000 key:0x184300 00:22:58.673 [2024-12-13 11:17:19.088886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.673 [2024-12-13 11:17:19.091870] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256800 was disconnected and freed. reset controller. 00:22:58.673 [2024-12-13 11:17:19.091911] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
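The bursts above are the bdev_nvme failover path draining disconnected RDMA qpairs: each queued READ/WRITE is printed, completed with "ABORTED - SQ DELETION (00/08)" (status code type 00h, status code 08h), and the qpair is then freed before the controller reset. When triaging such a run offline it can help to condense the burst into counts per opcode and per teardown event. The following is a minimal, hypothetical parser (not part of SPDK or these autotest scripts); the regexes assume exactly the record format shown in this excerpt.

```python
# Hypothetical log summarizer for the "ABORTED - SQ DELETION" bursts above.
# Assumes the nvme_qpair.c / bdev_nvme.c record formats seen in this excerpt.
import re
import sys
from collections import Counter

# One pattern per record type present in the excerpt.
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (?P<op>READ|WRITE) "
    r"sqid:(?P<sqid>\d+) cid:(?P<cid>\d+) nsid:(?P<nsid>\d+) "
    r"lba:(?P<lba>\d+) len:(?P<len>\d+)"
)
CPL_RE = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: (?P<status>[A-Z -]+)"
    r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\)"
)
RESET_RE = re.compile(
    r"qpair (?P<qpair>0x[0-9a-f]+) was disconnected and freed\. reset controller\."
)

def summarize(log_text: str) -> None:
    ops = Counter()       # aborted commands by opcode (READ / WRITE)
    statuses = Counter()  # completion statuses by (text, sct, sc)
    for m in CMD_RE.finditer(log_text):
        ops[m["op"]] += 1
    for m in CPL_RE.finditer(log_text):
        # sct/sc are the NVMe Status Code Type / Status Code printed in
        # parentheses, e.g. 00/08 for the SQ-deletion aborts in this log.
        statuses[(m["status"].strip(), m["sct"], m["sc"])] += 1
    for m in RESET_RE.finditer(log_text):
        print(f"qpair {m['qpair']} torn down, controller reset requested")
    print("aborted commands by opcode:", dict(ops))
    print("completion statuses:", dict(statuses))

if __name__ == "__main__":
    summarize(sys.stdin.read())
```

Fed the excerpt above on stdin, this would report the three freed qpairs (0x200019256c80, 0x200019256a40, 0x200019256800), the READ/WRITE split of the drained commands, and that every completion carries the same 00/08 status.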
00:22:58.673 [2024-12-13 11:17:19.091949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e5f380 len:0x10000 key:0x182f00 00:22:58.673 [2024-12-13 11:17:19.091974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.673 [2024-12-13 11:17:19.092027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2df780 len:0x10000 key:0x183100 00:22:58.673 [2024-12-13 11:17:19.092052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.673 [2024-12-13 11:17:19.092087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a30f900 len:0x10000 key:0x183100 00:22:58.673 [2024-12-13 11:17:19.092109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.673 [2024-12-13 11:17:19.092144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3cff00 len:0x10000 key:0x183100 00:22:58.673 [2024-12-13 11:17:19.092166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.673 [2024-12-13 11:17:19.092200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5dff80 len:0x10000 key:0x183300 00:22:58.673 [2024-12-13 11:17:19.092229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.673 [2024-12-13 11:17:19.092264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a02f800 len:0x10000 key:0x183000 00:22:58.673 [2024-12-13 11:17:19.092301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.673 [2024-12-13 11:17:19.092334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0dfd80 len:0x10000 key:0x183000 00:22:58.673 [2024-12-13 11:17:19.092357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.673 [2024-12-13 11:17:19.092392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a58fd00 len:0x10000 key:0x183300 00:22:58.673 [2024-12-13 11:17:19.092422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.673 [2024-12-13 11:17:19.092437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a28f500 len:0x10000 key:0x183100 00:22:58.673 [2024-12-13 11:17:19.092446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.673 [2024-12-13 
11:17:19.092459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a04f900 len:0x10000 key:0x183000 00:22:58.673 [2024-12-13 11:17:19.092468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.673 [2024-12-13 11:17:19.092481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a56fc00 len:0x10000 key:0x183300 00:22:58.673 [2024-12-13 11:17:19.092490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.673 [2024-12-13 11:17:19.092503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5bfe80 len:0x10000 key:0x183300 00:22:58.673 [2024-12-13 11:17:19.092512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.673 [2024-12-13 11:17:19.092526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e1f180 len:0x10000 key:0x182f00 00:22:58.673 [2024-12-13 11:17:19.092534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.673 [2024-12-13 11:17:19.092547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3dff80 len:0x10000 key:0x183100 00:22:58.673 [2024-12-13 11:17:19.092556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.673 [2024-12-13 11:17:19.092569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a35fb80 len:0x10000 key:0x183100 00:22:58.673 [2024-12-13 11:17:19.092578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.673 [2024-12-13 11:17:19.092591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a33fa80 len:0x10000 key:0x183100 00:22:58.673 [2024-12-13 11:17:19.092600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.673 [2024-12-13 11:17:19.092616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a23f280 len:0x10000 key:0x183100 00:22:58.673 [2024-12-13 11:17:19.092625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.673 [2024-12-13 11:17:19.092638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a09fb80 len:0x10000 key:0x183000 00:22:58.673 [2024-12-13 11:17:19.092647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.674 [2024-12-13 11:17:19.092660] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a36fc00 len:0x10000 key:0x183100 00:22:58.674 [2024-12-13 11:17:19.092668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.674 [2024-12-13 11:17:19.092682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a01f780 len:0x10000 key:0x183000 00:22:58.674 [2024-12-13 11:17:19.092691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.674 [2024-12-13 11:17:19.092703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0afc00 len:0x10000 key:0x183000 00:22:58.674 [2024-12-13 11:17:19.092712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.674 [2024-12-13 11:17:19.092726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a24f300 len:0x10000 key:0x183100 00:22:58.674 [2024-12-13 11:17:19.092734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.674 [2024-12-13 11:17:19.092747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5f0000 len:0x10000 key:0x183300 00:22:58.674 [2024-12-13 11:17:19.092757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.674 [2024-12-13 11:17:19.092770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e6f400 len:0x10000 key:0x182f00 00:22:58.674 [2024-12-13 11:17:19.092779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.674 [2024-12-13 11:17:19.092793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2bf680 len:0x10000 key:0x183100 00:22:58.674 [2024-12-13 11:17:19.092802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.674 [2024-12-13 11:17:19.092815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e3f280 len:0x10000 key:0x182f00 00:22:58.674 [2024-12-13 11:17:19.092825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.674 [2024-12-13 11:17:19.092838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5afe00 len:0x10000 key:0x183300 00:22:58.674 [2024-12-13 11:17:19.092847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.674 [2024-12-13 11:17:19.092862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e0f100 len:0x10000 key:0x182f00 00:22:58.674 [2024-12-13 11:17:19.092871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.674 [2024-12-13 11:17:19.092884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a39fd80 len:0x10000 key:0x183100 00:22:58.674 [2024-12-13 11:17:19.092893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.674 [2024-12-13 11:17:19.092906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a21f180 len:0x10000 key:0x183100 00:22:58.674 [2024-12-13 11:17:19.092915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.674 [2024-12-13 11:17:19.092928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a20f100 len:0x10000 key:0x183100 00:22:58.674 [2024-12-13 11:17:19.092937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.674 [2024-12-13 11:17:19.092950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a05f980 len:0x10000 key:0x183000 00:22:58.674 [2024-12-13 11:17:19.092958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.674 [2024-12-13 11:17:19.092971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e8f500 len:0x10000 key:0x182f00 00:22:58.674 [2024-12-13 11:17:19.092981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.674 [2024-12-13 11:17:19.092993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a27f480 len:0x10000 key:0x183100 00:22:58.674 [2024-12-13 11:17:19.093002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.674 [2024-12-13 11:17:19.093015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a00f700 len:0x10000 key:0x183000 00:22:58.674 [2024-12-13 11:17:19.093024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.674 [2024-12-13 11:17:19.093038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3f0000 len:0x10000 key:0x183100 00:22:58.674 [2024-12-13 11:17:19.093047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.674 [2024-12-13 11:17:19.093060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f5be000 len:0x10000 key:0x184300 00:22:58.674 [2024-12-13 11:17:19.093069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.674 [2024-12-13 11:17:19.093083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f5df000 len:0x10000 key:0x184300 00:22:58.674 [2024-12-13 11:17:19.093092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.674 [2024-12-13 11:17:19.093105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d8f000 len:0x10000 key:0x184300 00:22:58.674 [2024-12-13 11:17:19.093116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.674 [2024-12-13 11:17:19.093130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d6e000 len:0x10000 key:0x184300 00:22:58.674 [2024-12-13 11:17:19.093139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.674 [2024-12-13 11:17:19.093153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d4d000 len:0x10000 key:0x184300 00:22:58.674 [2024-12-13 11:17:19.093161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.674 [2024-12-13 11:17:19.093175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d2c000 len:0x10000 key:0x184300 00:22:58.674 [2024-12-13 11:17:19.093185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.674 [2024-12-13 11:17:19.093198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d0b000 len:0x10000 key:0x184300 00:22:58.674 [2024-12-13 11:17:19.093206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.674 [2024-12-13 11:17:19.093220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012cea000 len:0x10000 key:0x184300 00:22:58.674 [2024-12-13 11:17:19.093229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.674 [2024-12-13 11:17:19.093242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e55000 len:0x10000 key:0x184300 00:22:58.674 [2024-12-13 11:17:19.093251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.674 [2024-12-13 11:17:19.093265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x200012e34000 len:0x10000 key:0x184300 00:22:58.674 [2024-12-13 11:17:19.093279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.674 [2024-12-13 11:17:19.093293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e13000 len:0x10000 key:0x184300 00:22:58.674 [2024-12-13 11:17:19.093302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.674 [2024-12-13 11:17:19.093316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012df2000 len:0x10000 key:0x184300 00:22:58.674 [2024-12-13 11:17:19.093325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.674 [2024-12-13 11:17:19.093338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012dd1000 len:0x10000 key:0x184300 00:22:58.674 [2024-12-13 11:17:19.093347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.674 [2024-12-13 11:17:19.093361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012db0000 len:0x10000 key:0x184300 00:22:58.674 [2024-12-13 11:17:19.093372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.674 [2024-12-13 11:17:19.093387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be2f000 len:0x10000 key:0x184300 00:22:58.674 [2024-12-13 11:17:19.093396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.674 [2024-12-13 11:17:19.093410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be0e000 len:0x10000 key:0x184300 00:22:58.674 [2024-12-13 11:17:19.093418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.674 [2024-12-13 11:17:19.093431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb7a000 len:0x10000 key:0x184300 00:22:58.674 [2024-12-13 11:17:19.093440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.674 [2024-12-13 11:17:19.093453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb59000 len:0x10000 key:0x184300 00:22:58.674 [2024-12-13 11:17:19.093462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.674 [2024-12-13 11:17:19.093476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb38000 len:0x10000 key:0x184300 
00:22:58.674 [2024-12-13 11:17:19.093485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.675 [2024-12-13 11:17:19.093499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb17000 len:0x10000 key:0x184300 00:22:58.675 [2024-12-13 11:17:19.093508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.675 [2024-12-13 11:17:19.093521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000baf6000 len:0x10000 key:0x184300 00:22:58.675 [2024-12-13 11:17:19.093530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.675 [2024-12-13 11:17:19.093544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fcf6000 len:0x10000 key:0x184300 00:22:58.675 [2024-12-13 11:17:19.093552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.675 [2024-12-13 11:17:19.093567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fd17000 len:0x10000 key:0x184300 00:22:58.675 [2024-12-13 11:17:19.093576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.675 [2024-12-13 11:17:19.093589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fd38000 len:0x10000 key:0x184300 00:22:58.675 [2024-12-13 11:17:19.093598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.675 [2024-12-13 11:17:19.093611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000120cc000 len:0x10000 key:0x184300 00:22:58.675 [2024-12-13 11:17:19.093620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.675 [2024-12-13 11:17:19.093636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000120ab000 len:0x10000 key:0x184300 00:22:58.675 [2024-12-13 11:17:19.093644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.675 [2024-12-13 11:17:19.093657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001208a000 len:0x10000 key:0x184300 00:22:58.675 [2024-12-13 11:17:19.093666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.675 [2024-12-13 11:17:19.093680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bce5000 len:0x10000 key:0x184300 00:22:58.675 [2024-12-13 11:17:19.093688] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.675 [2024-12-13 11:17:19.096629] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192565c0 was disconnected and freed. reset controller. 00:22:58.675 [2024-12-13 11:17:19.096664] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:58.675 [2024-12-13 11:17:19.096700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a72fa00 len:0x10000 key:0x183d00 00:22:58.675 [2024-12-13 11:17:19.096724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.675 [2024-12-13 11:17:19.096761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9afe00 len:0x10000 key:0x184200 00:22:58.675 [2024-12-13 11:17:19.096787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.675 [2024-12-13 11:17:19.096823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9dff80 len:0x10000 key:0x184200 00:22:58.675 [2024-12-13 11:17:19.096847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.675 [2024-12-13 11:17:19.096881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6af600 len:0x10000 key:0x183d00 00:22:58.675 [2024-12-13 11:17:19.096906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.675 [2024-12-13 11:17:19.096941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8bf680 len:0x10000 key:0x184200 00:22:58.675 [2024-12-13 11:17:19.096966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.675 [2024-12-13 11:17:19.097001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a78fd00 len:0x10000 key:0x183d00 00:22:58.675 [2024-12-13 11:17:19.097025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.675 [2024-12-13 11:17:19.097059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a44f900 len:0x10000 key:0x183300 00:22:58.675 [2024-12-13 11:17:19.097085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.675 [2024-12-13 11:17:19.097119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a86f400 len:0x10000 key:0x184200 00:22:58.675 [2024-12-13 11:17:19.097151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.675 [2024-12-13 11:17:19.097186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a95fb80 len:0x10000 key:0x184200 00:22:58.675 [2024-12-13 11:17:19.097208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.675 [2024-12-13 11:17:19.097244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7afe00 len:0x10000 key:0x183d00 00:22:58.675 [2024-12-13 11:17:19.097293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.675 [2024-12-13 11:17:19.097330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a84f300 len:0x10000 key:0x184200 00:22:58.675 [2024-12-13 11:17:19.097355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.675 [2024-12-13 11:17:19.097389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a89f580 len:0x10000 key:0x184200 00:22:58.675 [2024-12-13 11:17:19.097414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.675 [2024-12-13 11:17:19.097449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6ef800 len:0x10000 key:0x183d00 00:22:58.675 [2024-12-13 11:17:19.097474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.675 [2024-12-13 11:17:19.097510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6bf680 len:0x10000 key:0x183d00 00:22:58.675 [2024-12-13 11:17:19.097535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.675 [2024-12-13 11:17:19.097569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a63f280 len:0x10000 key:0x183d00 00:22:58.675 [2024-12-13 11:17:19.097593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.675 [2024-12-13 11:17:19.097627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a61f180 len:0x10000 key:0x183d00 00:22:58.675 [2024-12-13 11:17:19.097652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.675 [2024-12-13 11:17:19.097687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a90f900 len:0x10000 key:0x184200 00:22:58.675 [2024-12-13 11:17:19.097712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 
00:22:58.675 [2024-12-13 11:17:19.097747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a40f700 len:0x10000 key:0x183300 00:22:58.675 [2024-12-13 11:17:19.097774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.675 [2024-12-13 11:17:19.097789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a64f300 len:0x10000 key:0x183d00 00:22:58.675 [2024-12-13 11:17:19.097801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.675 [2024-12-13 11:17:19.097815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a77fc80 len:0x10000 key:0x183d00 00:22:58.675 [2024-12-13 11:17:19.097824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.675 [2024-12-13 11:17:19.097837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a41f780 len:0x10000 key:0x183300 00:22:58.675 [2024-12-13 11:17:19.097847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.675 [2024-12-13 11:17:19.097860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a91f980 len:0x10000 key:0x184200 00:22:58.675 [2024-12-13 11:17:19.097870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.675 [2024-12-13 11:17:19.097883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8cf700 len:0x10000 key:0x184200 00:22:58.675 [2024-12-13 11:17:19.097892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.675 [2024-12-13 11:17:19.097905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a73fa80 len:0x10000 key:0x183d00 00:22:58.675 [2024-12-13 11:17:19.097915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.675 [2024-12-13 11:17:19.097928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a98fd00 len:0x10000 key:0x184200 00:22:58.675 [2024-12-13 11:17:19.097938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.675 [2024-12-13 11:17:19.097951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a70f900 len:0x10000 key:0x183d00 00:22:58.675 [2024-12-13 11:17:19.097960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.675 [2024-12-13 11:17:19.097973] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a88f500 len:0x10000 key:0x184200 00:22:58.675 [2024-12-13 11:17:19.097982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.676 [2024-12-13 11:17:19.097995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6df780 len:0x10000 key:0x183d00 00:22:58.676 [2024-12-13 11:17:19.098005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.676 [2024-12-13 11:17:19.098019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a67f480 len:0x10000 key:0x183d00 00:22:58.676 [2024-12-13 11:17:19.098029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.676 [2024-12-13 11:17:19.098042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ef800 len:0x10000 key:0x184200 00:22:58.676 [2024-12-13 11:17:19.098053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.676 [2024-12-13 11:17:19.098066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8df780 len:0x10000 key:0x184200 00:22:58.676 [2024-12-13 11:17:19.098076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.676 [2024-12-13 11:17:19.098089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7bfe80 len:0x10000 key:0x183d00 00:22:58.676 [2024-12-13 11:17:19.098098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.676 [2024-12-13 11:17:19.098111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a75fb80 len:0x10000 key:0x183d00 00:22:58.676 [2024-12-13 11:17:19.098121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.676 [2024-12-13 11:17:19.098134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a94fb00 len:0x10000 key:0x184200 00:22:58.676 [2024-12-13 11:17:19.098143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.676 [2024-12-13 11:17:19.098156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a76fc00 len:0x10000 key:0x183d00 00:22:58.676 [2024-12-13 11:17:19.098166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.676 [2024-12-13 11:17:19.098179] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6cf700 len:0x10000 key:0x183d00 00:22:58.676 [2024-12-13 11:17:19.098188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.676 [2024-12-13 11:17:19.098201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f9de000 len:0x10000 key:0x184300 00:22:58.676 [2024-12-13 11:17:19.098211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.676 [2024-12-13 11:17:19.098225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f9ff000 len:0x10000 key:0x184300 00:22:58.676 [2024-12-13 11:17:19.098235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.676 [2024-12-13 11:17:19.098249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012f7e000 len:0x10000 key:0x184300 00:22:58.676 [2024-12-13 11:17:19.098259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.676 [2024-12-13 11:17:19.098280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012f5d000 len:0x10000 key:0x184300 00:22:58.676 [2024-12-13 11:17:19.098289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.676 [2024-12-13 11:17:19.098303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012f3c000 len:0x10000 key:0x184300 00:22:58.676 [2024-12-13 11:17:19.098312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.676 [2024-12-13 11:17:19.098328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012f1b000 len:0x10000 key:0x184300 00:22:58.676 [2024-12-13 11:17:19.098338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.676 [2024-12-13 11:17:19.098351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012efa000 len:0x10000 key:0x184300 00:22:58.676 [2024-12-13 11:17:19.098361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.676 [2024-12-13 11:17:19.098375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013065000 len:0x10000 key:0x184300 00:22:58.676 [2024-12-13 11:17:19.098384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.676 [2024-12-13 11:17:19.098398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013044000 len:0x10000 key:0x184300 00:22:58.676 [2024-12-13 11:17:19.098409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.676 [2024-12-13 11:17:19.098423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013023000 len:0x10000 key:0x184300 00:22:58.676 [2024-12-13 11:17:19.098432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.676 [2024-12-13 11:17:19.098446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013002000 len:0x10000 key:0x184300 00:22:58.676 [2024-12-13 11:17:19.098455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.676 [2024-12-13 11:17:19.098468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012fe1000 len:0x10000 key:0x184300 00:22:58.676 [2024-12-13 11:17:19.098477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.676 [2024-12-13 11:17:19.098490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012fc0000 len:0x10000 key:0x184300 00:22:58.676 [2024-12-13 11:17:19.098499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.676 [2024-12-13 11:17:19.098513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c03f000 len:0x10000 key:0x184300 00:22:58.676 [2024-12-13 11:17:19.098522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.676 [2024-12-13 11:17:19.098536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c01e000 len:0x10000 key:0x184300 00:22:58.676 [2024-12-13 11:17:19.098545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.676 [2024-12-13 11:17:19.098559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bffd000 len:0x10000 key:0x184300 00:22:58.676 [2024-12-13 11:17:19.098567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.676 [2024-12-13 11:17:19.098581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c7da000 len:0x10000 key:0x184300 00:22:58.676 [2024-12-13 11:17:19.098592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.676 [2024-12-13 11:17:19.098606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL 
KEYED DATA BLOCK ADDRESS 0x20000bded000 len:0x10000 key:0x184300 00:22:58.676 [2024-12-13 11:17:19.098615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.676 [2024-12-13 11:17:19.098629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bdcc000 len:0x10000 key:0x184300 00:22:58.676 [2024-12-13 11:17:19.098638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.676 [2024-12-13 11:17:19.098652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bdab000 len:0x10000 key:0x184300 00:22:58.676 [2024-12-13 11:17:19.098661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.676 [2024-12-13 11:17:19.098674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd8a000 len:0x10000 key:0x184300 00:22:58.676 [2024-12-13 11:17:19.098684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.676 [2024-12-13 11:17:19.098698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd69000 len:0x10000 key:0x184300 00:22:58.676 [2024-12-13 11:17:19.098708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.676 [2024-12-13 11:17:19.098721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd48000 len:0x10000 key:0x184300 00:22:58.676 [2024-12-13 11:17:19.098730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.676 [2024-12-13 11:17:19.098745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd27000 len:0x10000 key:0x184300 00:22:58.676 [2024-12-13 11:17:19.098753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.676 [2024-12-13 11:17:19.098768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd06000 len:0x10000 key:0x184300 00:22:58.676 [2024-12-13 11:17:19.098778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.676 [2024-12-13 11:17:19.098791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010116000 len:0x10000 key:0x184300 00:22:58.676 [2024-12-13 11:17:19.098800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.676 [2024-12-13 11:17:19.098814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010137000 
len:0x10000 key:0x184300 00:22:58.676 [2024-12-13 11:17:19.098823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.676 [2024-12-13 11:17:19.098837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010158000 len:0x10000 key:0x184300 00:22:58.676 [2024-12-13 11:17:19.098849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.677 [2024-12-13 11:17:19.101900] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256380 was disconnected and freed. reset controller. 00:22:58.677 [2024-12-13 11:17:19.101936] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:58.677 [2024-12-13 11:17:19.101971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a80f100 len:0x10000 key:0x184200 00:22:58.677 [2024-12-13 11:17:19.101995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.677 [2024-12-13 11:17:19.102032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac8f500 len:0x10000 key:0x183600 00:22:58.677 [2024-12-13 11:17:19.102059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.677 [2024-12-13 11:17:19.102094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acbf680 len:0x10000 key:0x183600 00:22:58.677 [2024-12-13 11:17:19.102118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.677 [2024-12-13 11:17:19.102151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad7fc80 len:0x10000 key:0x183600 00:22:58.677 [2024-12-13 11:17:19.102176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.677 [2024-12-13 11:17:19.102210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af8fd00 len:0x10000 key:0x183800 00:22:58.677 [2024-12-13 11:17:19.102235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.677 [2024-12-13 11:17:19.102280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa2f800 len:0x10000 key:0x183500 00:22:58.677 [2024-12-13 11:17:19.102305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.677 [2024-12-13 11:17:19.102340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aadfd80 len:0x10000 key:0x183500 00:22:58.677 [2024-12-13 11:17:19.102363] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.677 [2024-12-13 11:17:19.102397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af3fa80 len:0x10000 key:0x183800 00:22:58.677 [2024-12-13 11:17:19.102422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.677 [2024-12-13 11:17:19.102456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac3f280 len:0x10000 key:0x183600 00:22:58.677 [2024-12-13 11:17:19.102479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.677 [2024-12-13 11:17:19.102513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa4f900 len:0x10000 key:0x183500 00:22:58.677 [2024-12-13 11:17:19.102544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.677 [2024-12-13 11:17:19.102565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af1f980 len:0x10000 key:0x183800 00:22:58.677 [2024-12-13 11:17:19.102574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.677 [2024-12-13 11:17:19.102588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af6fc00 len:0x10000 key:0x183800 00:22:58.677 [2024-12-13 11:17:19.102597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.677 [2024-12-13 11:17:19.102610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adbfe80 len:0x10000 key:0x183600 00:22:58.677 [2024-12-13 11:17:19.102619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.677 [2024-12-13 11:17:19.102632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad8fd00 len:0x10000 key:0x183600 00:22:58.677 [2024-12-13 11:17:19.102641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.677 [2024-12-13 11:17:19.102654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad0f900 len:0x10000 key:0x183600 00:22:58.677 [2024-12-13 11:17:19.102663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.677 [2024-12-13 11:17:19.102676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acef800 len:0x10000 key:0x183600 00:22:58.677 [2024-12-13 11:17:19.102685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.677 [2024-12-13 11:17:19.102698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afdff80 len:0x10000 key:0x183800 00:22:58.677 [2024-12-13 11:17:19.102707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.677 [2024-12-13 11:17:19.102721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa9fb80 len:0x10000 key:0x183500 00:22:58.677 [2024-12-13 11:17:19.102729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.677 [2024-12-13 11:17:19.102743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad1f980 len:0x10000 key:0x183600 00:22:58.677 [2024-12-13 11:17:19.102752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.677 [2024-12-13 11:17:19.102765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa1f780 len:0x10000 key:0x183500 00:22:58.677 [2024-12-13 11:17:19.102774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.677 [2024-12-13 11:17:19.102788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aaafc00 len:0x10000 key:0x183500 00:22:58.677 [2024-12-13 11:17:19.102797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.677 [2024-12-13 11:17:19.102812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aff0000 len:0x10000 key:0x183800 00:22:58.677 [2024-12-13 11:17:19.102821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.677 [2024-12-13 11:17:19.102834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af9fd80 len:0x10000 key:0x183800 00:22:58.677 [2024-12-13 11:17:19.102843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.677 [2024-12-13 11:17:19.102856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a81f180 len:0x10000 key:0x184200 00:22:58.677 [2024-12-13 11:17:19.102865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.677 [2024-12-13 11:17:19.102878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac6f400 len:0x10000 key:0x183600 00:22:58.677 [2024-12-13 11:17:19.102887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.677 [2024-12-13 11:17:19.102900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001addff80 len:0x10000 key:0x183600 00:22:58.677 [2024-12-13 11:17:19.102908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.677 [2024-12-13 11:17:19.102922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af5fb80 len:0x10000 key:0x183800 00:22:58.677 [2024-12-13 11:17:19.102932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.677 [2024-12-13 11:17:19.102945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adafe00 len:0x10000 key:0x183600 00:22:58.677 [2024-12-13 11:17:19.102954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.677 [2024-12-13 11:17:19.102967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad4fb00 len:0x10000 key:0x183600 00:22:58.677 [2024-12-13 11:17:19.102976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.677 [2024-12-13 11:17:19.102989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afbfe80 len:0x10000 key:0x183800 00:22:58.677 [2024-12-13 11:17:19.102998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.677 [2024-12-13 11:17:19.103012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afafe00 len:0x10000 key:0x183800 00:22:58.677 [2024-12-13 11:17:19.103021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.677 [2024-12-13 11:17:19.103034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa5f980 len:0x10000 key:0x183500 00:22:58.677 [2024-12-13 11:17:19.103043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.677 [2024-12-13 11:17:19.103057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a83f280 len:0x10000 key:0x184200 00:22:58.677 [2024-12-13 11:17:19.103067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.677 [2024-12-13 11:17:19.103081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac2f200 len:0x10000 key:0x183600 00:22:58.677 [2024-12-13 11:17:19.103090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 
00:22:58.677 [2024-12-13 11:17:19.103103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa0f700 len:0x10000 key:0x183500 00:22:58.677 [2024-12-13 11:17:19.103113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.677 [2024-12-13 11:17:19.103126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad9fd80 len:0x10000 key:0x183600 00:22:58.677 [2024-12-13 11:17:19.103135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.678 [2024-12-13 11:17:19.103149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fdfe000 len:0x10000 key:0x184300 00:22:58.678 [2024-12-13 11:17:19.103158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.678 [2024-12-13 11:17:19.103172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fe1f000 len:0x10000 key:0x184300 00:22:58.678 [2024-12-13 11:17:19.103182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.678 [2024-12-13 11:17:19.103196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001312b000 len:0x10000 key:0x184300 00:22:58.678 [2024-12-13 11:17:19.103206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.678 [2024-12-13 11:17:19.103219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001310a000 len:0x10000 key:0x184300 00:22:58.678 [2024-12-13 11:17:19.103228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.678 [2024-12-13 11:17:19.103242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013275000 len:0x10000 key:0x184300 00:22:58.678 [2024-12-13 11:17:19.103252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.678 [2024-12-13 11:17:19.103282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013254000 len:0x10000 key:0x184300 00:22:58.678 [2024-12-13 11:17:19.103292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.678 [2024-12-13 11:17:19.103306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013233000 len:0x10000 key:0x184300 00:22:58.678 [2024-12-13 11:17:19.103315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.678 [2024-12-13 11:17:19.103329] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013212000 len:0x10000 key:0x184300 00:22:58.678 [2024-12-13 11:17:19.103340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.678 [2024-12-13 11:17:19.103353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000131f1000 len:0x10000 key:0x184300 00:22:58.678 [2024-12-13 11:17:19.103362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.678 [2024-12-13 11:17:19.103376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000131d0000 len:0x10000 key:0x184300 00:22:58.678 [2024-12-13 11:17:19.103385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.678 [2024-12-13 11:17:19.103399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c24f000 len:0x10000 key:0x184300 00:22:58.678 [2024-12-13 11:17:19.103409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.678 [2024-12-13 11:17:19.103422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c22e000 len:0x10000 key:0x184300 00:22:58.678 [2024-12-13 11:17:19.103431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.678 [2024-12-13 11:17:19.103445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c20d000 len:0x10000 key:0x184300 00:22:58.678 [2024-12-13 11:17:19.103454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.678 [2024-12-13 11:17:19.103469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c1ec000 len:0x10000 key:0x184300 00:22:58.678 [2024-12-13 11:17:19.103479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.678 [2024-12-13 11:17:19.103492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c1cb000 len:0x10000 key:0x184300 00:22:58.678 [2024-12-13 11:17:19.103502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.678 [2024-12-13 11:17:19.103515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c1aa000 len:0x10000 key:0x184300 00:22:58.678 [2024-12-13 11:17:19.103524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.678 [2024-12-13 11:17:19.103538] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001290c000 len:0x10000 key:0x184300 00:22:58.678 [2024-12-13 11:17:19.103548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.678 [2024-12-13 11:17:19.103561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000128eb000 len:0x10000 key:0x184300 00:22:58.678 [2024-12-13 11:17:19.103570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.678 [2024-12-13 11:17:19.103584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000128ca000 len:0x10000 key:0x184300 00:22:58.678 [2024-12-13 11:17:19.103595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.678 [2024-12-13 11:17:19.103610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000128a9000 len:0x10000 key:0x184300 00:22:58.678 [2024-12-13 11:17:19.103620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.678 [2024-12-13 11:17:19.103634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012888000 len:0x10000 key:0x184300 00:22:58.678 [2024-12-13 11:17:19.103643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.678 [2024-12-13 11:17:19.103657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bfdc000 len:0x10000 key:0x184300 00:22:58.678 [2024-12-13 11:17:19.103666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.678 [2024-12-13 11:17:19.103680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bfbb000 len:0x10000 key:0x184300 00:22:58.678 [2024-12-13 11:17:19.103690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.678 [2024-12-13 11:17:19.103704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf9a000 len:0x10000 key:0x184300 00:22:58.678 [2024-12-13 11:17:19.103717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.678 [2024-12-13 11:17:19.103730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf79000 len:0x10000 key:0x184300 00:22:58.678 [2024-12-13 11:17:19.103740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.678 [2024-12-13 11:17:19.103754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf58000 len:0x10000 key:0x184300 00:22:58.678 [2024-12-13 11:17:19.103764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.678 [2024-12-13 11:17:19.103777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf37000 len:0x10000 key:0x184300 00:22:58.678 [2024-12-13 11:17:19.103786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.678 [2024-12-13 11:17:19.103800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf16000 len:0x10000 key:0x184300 00:22:58.678 [2024-12-13 11:17:19.103809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.678 [2024-12-13 11:17:19.106663] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256140 was disconnected and freed. reset controller. 00:22:58.678 [2024-12-13 11:17:19.106699] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:58.678 [2024-12-13 11:17:19.106733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0df780 len:0x10000 key:0x183200 00:22:58.678 [2024-12-13 11:17:19.106756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.678 [2024-12-13 11:17:19.106807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b35fb80 len:0x10000 key:0x183f00 00:22:58.678 [2024-12-13 11:17:19.106838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.678 [2024-12-13 11:17:19.106873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b38fd00 len:0x10000 key:0x183f00 00:22:58.678 [2024-12-13 11:17:19.106897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.678 [2024-12-13 11:17:19.106932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b05f380 len:0x10000 key:0x183200 00:22:58.678 [2024-12-13 11:17:19.106955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.678 [2024-12-13 11:17:19.106989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b26f400 len:0x10000 key:0x183f00 00:22:58.679 [2024-12-13 11:17:19.107012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.679 [2024-12-13 11:17:19.107048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b13fa80 len:0x10000 key:0x183200 00:22:58.679 [2024-12-13 
11:17:19.107071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.679 [2024-12-13 11:17:19.107105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1f0000 len:0x10000 key:0x183200 00:22:58.679 [2024-12-13 11:17:19.107130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.679 [2024-12-13 11:17:19.107165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b21f180 len:0x10000 key:0x183f00 00:22:58.679 [2024-12-13 11:17:19.107190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.679 [2024-12-13 11:17:19.107225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b30f900 len:0x10000 key:0x183f00 00:22:58.679 [2024-12-13 11:17:19.107249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.679 [2024-12-13 11:17:19.107293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b15fb80 len:0x10000 key:0x183200 00:22:58.679 [2024-12-13 11:17:19.107320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.679 [2024-12-13 11:17:19.107354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5f0000 len:0x10000 key:0x183700 00:22:58.679 [2024-12-13 11:17:19.107380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.679 [2024-12-13 11:17:19.107414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b24f300 len:0x10000 key:0x183f00 00:22:58.679 [2024-12-13 11:17:19.107439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.679 [2024-12-13 11:17:19.107474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b09f580 len:0x10000 key:0x183200 00:22:58.679 [2024-12-13 11:17:19.107503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.679 [2024-12-13 11:17:19.107539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b06f400 len:0x10000 key:0x183200 00:22:58.679 [2024-12-13 11:17:19.107563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.679 [2024-12-13 11:17:19.107598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3dff80 len:0x10000 key:0x183f00 00:22:58.679 [2024-12-13 11:17:19.107623] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.679 [2024-12-13 11:17:19.107657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3bfe80 len:0x10000 key:0x183f00 00:22:58.679 [2024-12-13 11:17:19.107682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.679 [2024-12-13 11:17:19.107716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2bf680 len:0x10000 key:0x183f00 00:22:58.679 [2024-12-13 11:17:19.107741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.679 [2024-12-13 11:17:19.107776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1afe00 len:0x10000 key:0x183200 00:22:58.679 [2024-12-13 11:17:19.107802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.679 [2024-12-13 11:17:19.107817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3f0000 len:0x10000 key:0x183f00 00:22:58.679 [2024-12-13 11:17:19.107826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.679 [2024-12-13 11:17:19.107840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b12fa00 len:0x10000 key:0x183200 00:22:58.679 [2024-12-13 11:17:19.107850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.679 [2024-12-13 11:17:19.107864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1bfe80 len:0x10000 key:0x183200 00:22:58.679 [2024-12-13 11:17:19.107873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.679 [2024-12-13 11:17:19.107886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2cf700 len:0x10000 key:0x183f00 00:22:58.679 [2024-12-13 11:17:19.107895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.679 [2024-12-13 11:17:19.107909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b27f480 len:0x10000 key:0x183f00 00:22:58.679 [2024-12-13 11:17:19.107918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.679 [2024-12-13 11:17:19.107932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ef800 len:0x10000 key:0x183200 00:22:58.679 [2024-12-13 11:17:19.107941] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.679 [2024-12-13 11:17:19.107957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b33fa80 len:0x10000 key:0x183f00 00:22:58.679 [2024-12-13 11:17:19.107966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.679 [2024-12-13 11:17:19.107979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0bf680 len:0x10000 key:0x183200 00:22:58.679 [2024-12-13 11:17:19.107988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.679 [2024-12-13 11:17:19.108001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b23f280 len:0x10000 key:0x183f00 00:22:58.679 [2024-12-13 11:17:19.108011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.679 [2024-12-13 11:17:19.108024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b08f500 len:0x10000 key:0x183200 00:22:58.679 [2024-12-13 11:17:19.108034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.679 [2024-12-13 11:17:19.108048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b02f200 len:0x10000 key:0x183200 00:22:58.679 [2024-12-13 11:17:19.108057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.679 [2024-12-13 11:17:19.108070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b29f580 len:0x10000 key:0x183f00 00:22:58.679 [2024-12-13 11:17:19.108079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.679 [2024-12-13 11:17:19.108092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b28f500 len:0x10000 key:0x183f00 00:22:58.679 [2024-12-13 11:17:19.108101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.679 [2024-12-13 11:17:19.108115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b16fc00 len:0x10000 key:0x183200 00:22:58.679 [2024-12-13 11:17:19.108124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.679 [2024-12-13 11:17:19.108138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b10f900 len:0x10000 key:0x183200 00:22:58.679 [2024-12-13 11:17:19.108148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.679 [2024-12-13 11:17:19.108161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ff880 len:0x10000 key:0x183f00 00:22:58.679 [2024-12-13 11:17:19.108170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.679 [2024-12-13 11:17:19.108183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b11f980 len:0x10000 key:0x183200 00:22:58.679 [2024-12-13 11:17:19.108192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.679 [2024-12-13 11:17:19.108207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b07f480 len:0x10000 key:0x183200 00:22:58.679 [2024-12-13 11:17:19.108216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.679 [2024-12-13 11:17:19.108230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001021e000 len:0x10000 key:0x184300 00:22:58.679 [2024-12-13 11:17:19.108239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.679 [2024-12-13 11:17:19.108254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001023f000 len:0x10000 key:0x184300 00:22:58.679 [2024-12-13 11:17:19.108263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.679 [2024-12-13 11:17:19.108282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001333b000 len:0x10000 key:0x184300 00:22:58.679 [2024-12-13 11:17:19.108291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.679 [2024-12-13 11:17:19.108306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001331a000 len:0x10000 key:0x184300 00:22:58.679 [2024-12-13 11:17:19.108315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.679 [2024-12-13 11:17:19.108330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013485000 len:0x10000 key:0x184300 00:22:58.679 [2024-12-13 11:17:19.108339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.679 [2024-12-13 11:17:19.108353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013464000 len:0x10000 key:0x184300 00:22:58.679 [2024-12-13 11:17:19.108362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 
m:0 dnr:0 00:22:58.680 [2024-12-13 11:17:19.108376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013443000 len:0x10000 key:0x184300 00:22:58.680 [2024-12-13 11:17:19.108386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.680 [2024-12-13 11:17:19.108400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013422000 len:0x10000 key:0x184300 00:22:58.680 [2024-12-13 11:17:19.108410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.680 [2024-12-13 11:17:19.108424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013401000 len:0x10000 key:0x184300 00:22:58.680 [2024-12-13 11:17:19.108433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.680 [2024-12-13 11:17:19.108447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000133e0000 len:0x10000 key:0x184300 00:22:58.680 [2024-12-13 11:17:19.108456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.680 [2024-12-13 11:17:19.108470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c45f000 len:0x10000 key:0x184300 00:22:58.680 [2024-12-13 11:17:19.108481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.680 [2024-12-13 11:17:19.108495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c43e000 len:0x10000 key:0x184300 00:22:58.680 [2024-12-13 11:17:19.108505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.680 [2024-12-13 11:17:19.108519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c41d000 len:0x10000 key:0x184300 00:22:58.680 [2024-12-13 11:17:19.108529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.680 [2024-12-13 11:17:19.108542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3fc000 len:0x10000 key:0x184300 00:22:58.680 [2024-12-13 11:17:19.108552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.680 [2024-12-13 11:17:19.108566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3db000 len:0x10000 key:0x184300 00:22:58.680 [2024-12-13 11:17:19.108576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.680 [2024-12-13 
11:17:19.108590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3ba000 len:0x10000 key:0x184300 00:22:58.680 [2024-12-13 11:17:19.108599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.680 [2024-12-13 11:17:19.108613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010536000 len:0x10000 key:0x184300 00:22:58.680 [2024-12-13 11:17:19.108622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.680 [2024-12-13 11:17:19.108636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010557000 len:0x10000 key:0x184300 00:22:58.680 [2024-12-13 11:17:19.108645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.680 [2024-12-13 11:17:19.108660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010578000 len:0x10000 key:0x184300 00:22:58.680 [2024-12-13 11:17:19.108669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.680 [2024-12-13 11:17:19.108682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df92000 len:0x10000 key:0x184300 00:22:58.680 [2024-12-13 11:17:19.108692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.680 [2024-12-13 11:17:19.108707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df71000 len:0x10000 key:0x184300 00:22:58.680 [2024-12-13 11:17:19.108716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.680 [2024-12-13 11:17:19.108730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df50000 len:0x10000 key:0x184300 00:22:58.680 [2024-12-13 11:17:19.108742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.680 [2024-12-13 11:17:19.108756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca8f000 len:0x10000 key:0x184300 00:22:58.680 [2024-12-13 11:17:19.108765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.680 [2024-12-13 11:17:19.108779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca6e000 len:0x10000 key:0x184300 00:22:58.680 [2024-12-13 11:17:19.108788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.680 [2024-12-13 11:17:19.108802] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca4d000 len:0x10000 key:0x184300 00:22:58.680 [2024-12-13 11:17:19.108811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.680 [2024-12-13 11:17:19.108825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca2c000 len:0x10000 key:0x184300 00:22:58.680 [2024-12-13 11:17:19.108834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.680 [2024-12-13 11:17:19.108848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca0b000 len:0x10000 key:0x184300 00:22:58.680 [2024-12-13 11:17:19.108857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.680 [2024-12-13 11:17:19.108870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c9ea000 len:0x10000 key:0x184300 00:22:58.680 [2024-12-13 11:17:19.108879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.680 [2024-12-13 11:17:19.111680] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806c00 was disconnected and freed. reset controller. 00:22:58.680 [2024-12-13 11:17:19.111696] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:58.680 [2024-12-13 11:17:19.111710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:55552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b65f380 len:0x10000 key:0x183900 00:22:58.680 [2024-12-13 11:17:19.111720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.680 [2024-12-13 11:17:19.111736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b72fa00 len:0x10000 key:0x183900 00:22:58.680 [2024-12-13 11:17:19.111746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.680 [2024-12-13 11:17:19.111760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b92fa00 len:0x10000 key:0x184400 00:22:58.680 [2024-12-13 11:17:19.111769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.680 [2024-12-13 11:17:19.111784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:55936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b61f180 len:0x10000 key:0x183900 00:22:58.680 [2024-12-13 11:17:19.111793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.680 [2024-12-13 11:17:19.111811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:56064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b96fc00 len:0x10000 key:0x184400 00:22:58.680 [2024-12-13 11:17:19.111822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.680 [2024-12-13 11:17:19.111836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:56192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6cf700 len:0x10000 key:0x183900 00:22:58.680 [2024-12-13 11:17:19.111845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.680 [2024-12-13 11:17:19.111859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:56320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b45f980 len:0x10000 key:0x183700 00:22:58.680 [2024-12-13 11:17:19.111869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.680 [2024-12-13 11:17:19.111882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:56448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b40f700 len:0x10000 key:0x183700 00:22:58.680 [2024-12-13 11:17:19.111892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.680 [2024-12-13 11:17:19.111906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:56576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b95fb80 len:0x10000 key:0x184400 00:22:58.680 [2024-12-13 11:17:19.111915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.680 [2024-12-13 
11:17:19.111929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:56704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9f0000 len:0x10000 key:0x184400 00:22:58.680 [2024-12-13 11:17:19.111938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.680 [2024-12-13 11:17:19.111951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:56832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b70f900 len:0x10000 key:0x183900 00:22:58.680 [2024-12-13 11:17:19.111961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.680 [2024-12-13 11:17:19.111974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:56960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b60f100 len:0x10000 key:0x183900 00:22:58.680 [2024-12-13 11:17:19.111983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.680 [2024-12-13 11:17:19.111997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:57088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b49fb80 len:0x10000 key:0x183700 00:22:58.680 [2024-12-13 11:17:19.112006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.680 [2024-12-13 11:17:19.112020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b44f900 len:0x10000 key:0x183700 00:22:58.680 [2024-12-13 11:17:19.112029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.681 [2024-12-13 11:17:19.112043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:57344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b78fd00 len:0x10000 key:0x183900 00:22:58.681 [2024-12-13 11:17:19.112053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.681 [2024-12-13 11:17:19.112067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:57472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6af600 len:0x10000 key:0x183900 00:22:58.681 [2024-12-13 11:17:19.112078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.681 [2024-12-13 11:17:19.112093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:57600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b42f800 len:0x10000 key:0x183700 00:22:58.681 [2024-12-13 11:17:19.112103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.681 [2024-12-13 11:17:19.112117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:57728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7dff80 len:0x10000 key:0x183900 00:22:58.681 [2024-12-13 11:17:19.112126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.681 [2024-12-13 11:17:19.112139] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:57856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7cff00 len:0x10000 key:0x183900 00:22:58.681 [2024-12-13 11:17:19.112149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.681 [2024-12-13 11:17:19.112163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:57984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9afe00 len:0x10000 key:0x184400 00:22:58.681 [2024-12-13 11:17:19.112172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.681 [2024-12-13 11:17:19.112185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:58112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b90f900 len:0x10000 key:0x184400 00:22:58.681 [2024-12-13 11:17:19.112195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.681 [2024-12-13 11:17:19.112208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:58240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ef800 len:0x10000 key:0x183900 00:22:58.681 [2024-12-13 11:17:19.112217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.681 [2024-12-13 11:17:19.112230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:58368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b67f480 len:0x10000 key:0x183900 00:22:58.681 [2024-12-13 11:17:19.112239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.681 [2024-12-13 11:17:19.112253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:58496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b64f300 len:0x10000 key:0x183900 00:22:58.681 [2024-12-13 11:17:19.112262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.681 [2024-12-13 11:17:19.112282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:58624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b91f980 len:0x10000 key:0x184400 00:22:58.681 [2024-12-13 11:17:19.112292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.681 [2024-12-13 11:17:19.112305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:58752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b93fa80 len:0x10000 key:0x184400 00:22:58.681 [2024-12-13 11:17:19.112316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.681 [2024-12-13 11:17:19.112329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:58880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6bf680 len:0x10000 key:0x183900 00:22:58.681 [2024-12-13 11:17:19.112340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.681 [2024-12-13 11:17:19.112354] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:44 nsid:1 lba:59008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b99fd80 len:0x10000 key:0x184400 00:22:58.681 [2024-12-13 11:17:19.112363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.681 [2024-12-13 11:17:19.112376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:59136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b94fb00 len:0x10000 key:0x184400 00:22:58.681 [2024-12-13 11:17:19.112386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.681 [2024-12-13 11:17:19.112400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:59264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b75fb80 len:0x10000 key:0x183900 00:22:58.681 [2024-12-13 11:17:19.112409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.681 [2024-12-13 11:17:19.112422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4dfd80 len:0x10000 key:0x183700 00:22:58.681 [2024-12-13 11:17:19.112431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.681 [2024-12-13 11:17:19.112444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:59520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b77fc80 len:0x10000 key:0x183900 00:22:58.681 [2024-12-13 11:17:19.112454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.681 [2024-12-13 11:17:19.112467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:59648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8df780 len:0x10000 key:0x184400 00:22:58.681 [2024-12-13 11:17:19.112477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.681 [2024-12-13 11:17:19.112490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:59776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b76fc00 len:0x10000 key:0x183900 00:22:58.681 [2024-12-13 11:17:19.112499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.681 [2024-12-13 11:17:19.112513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b47fa80 len:0x10000 key:0x183700 00:22:58.681 [2024-12-13 11:17:19.112523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.681 [2024-12-13 11:17:19.112536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:60032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b73fa80 len:0x10000 key:0x183900 00:22:58.681 [2024-12-13 11:17:19.112545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.681 [2024-12-13 11:17:19.112559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 
lba:60160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b69f580 len:0x10000 key:0x183900 00:22:58.681 [2024-12-13 11:17:19.112568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.681 [2024-12-13 11:17:19.112581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:60288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b68f500 len:0x10000 key:0x183900 00:22:58.681 [2024-12-13 11:17:19.112595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.681 [2024-12-13 11:17:19.112609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:60416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7bfe80 len:0x10000 key:0x183900 00:22:58.681 [2024-12-13 11:17:19.112619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.681 [2024-12-13 11:17:19.112632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6df780 len:0x10000 key:0x183900 00:22:58.681 [2024-12-13 11:17:19.112641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.681 [2024-12-13 11:17:19.112656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:60672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b98fd00 len:0x10000 key:0x184400 00:22:58.681 [2024-12-13 11:17:19.112665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.681 [2024-12-13 11:17:19.112678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b48fb00 len:0x10000 key:0x183700 00:22:58.681 [2024-12-13 11:17:19.112687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.681 [2024-12-13 11:17:19.112701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b79fd80 len:0x10000 key:0x183900 00:22:58.681 [2024-12-13 11:17:19.112710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.681 [2024-12-13 11:17:19.112724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b71f980 len:0x10000 key:0x183900 00:22:58.681 [2024-12-13 11:17:19.112733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.681 [2024-12-13 11:17:19.112746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ff880 len:0x10000 key:0x183900 00:22:58.681 [2024-12-13 11:17:19.112755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.681 [2024-12-13 11:17:19.112769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61312 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x20001b41f780 len:0x10000 key:0x183700 00:22:58.681 [2024-12-13 11:17:19.112778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.681 [2024-12-13 11:17:19.112791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:61440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7afe00 len:0x10000 key:0x183900 00:22:58.681 [2024-12-13 11:17:19.112800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.681 [2024-12-13 11:17:19.112814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:61568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b63f280 len:0x10000 key:0x183900 00:22:58.681 [2024-12-13 11:17:19.112822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.681 [2024-12-13 11:17:19.112836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:61696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9cff00 len:0x10000 key:0x184400 00:22:58.681 [2024-12-13 11:17:19.112845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.681 [2024-12-13 11:17:19.112860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:51200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012699000 len:0x10000 key:0x184300 00:22:58.681 [2024-12-13 11:17:19.112869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.681 [2024-12-13 11:17:19.112883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:51456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000120ed000 len:0x10000 key:0x184300 00:22:58.681 [2024-12-13 11:17:19.112893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.682 [2024-12-13 11:17:19.112907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:51584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001210e000 len:0x10000 key:0x184300 00:22:58.682 [2024-12-13 11:17:19.112917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.682 [2024-12-13 11:17:19.112931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:51712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001212f000 len:0x10000 key:0x184300 00:22:58.682 [2024-12-13 11:17:19.112940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.682 [2024-12-13 11:17:19.112954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:51840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010aa0000 len:0x10000 key:0x184300 00:22:58.682 [2024-12-13 11:17:19.112963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.682 [2024-12-13 11:17:19.112977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:52224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010ac1000 len:0x10000 
key:0x184300 00:22:58.682 [2024-12-13 11:17:19.112986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.682 [2024-12-13 11:17:19.113000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010ae2000 len:0x10000 key:0x184300 00:22:58.682 [2024-12-13 11:17:19.113009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.682 [2024-12-13 11:17:19.113022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:52480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d143000 len:0x10000 key:0x184300 00:22:58.682 [2024-12-13 11:17:19.113031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.682 [2024-12-13 11:17:19.113045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:52608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d164000 len:0x10000 key:0x184300 00:22:58.682 [2024-12-13 11:17:19.113055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.682 [2024-12-13 11:17:19.113069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:53120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d185000 len:0x10000 key:0x184300 00:22:58.682 [2024-12-13 11:17:19.113079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.682 [2024-12-13 11:17:19.113092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:53376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b54a000 len:0x10000 key:0x184300 00:22:58.682 [2024-12-13 11:17:19.113102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.682 [2024-12-13 11:17:19.113115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:53888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b529000 len:0x10000 key:0x184300 00:22:58.682 [2024-12-13 11:17:19.113127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.682 [2024-12-13 11:17:19.113144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:54400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e856000 len:0x10000 key:0x184300 00:22:58.682 [2024-12-13 11:17:19.113153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.682 [2024-12-13 11:17:19.113168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:54528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e835000 len:0x10000 key:0x184300 00:22:58.682 [2024-12-13 11:17:19.113177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.682 [2024-12-13 11:17:19.113191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:54784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e814000 len:0x10000 key:0x184300 00:22:58.682 [2024-12-13 
11:17:19.113200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a2df000 sqhd:5310 p:0 m:0 dnr:0 00:22:58.682 [2024-12-13 11:17:19.130329] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8069c0 was disconnected and freed. reset controller. 00:22:58.682 [2024-12-13 11:17:19.130373] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:58.682 [2024-12-13 11:17:19.130524] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:58.682 [2024-12-13 11:17:19.130562] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:58.682 [2024-12-13 11:17:19.130593] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:58.682 [2024-12-13 11:17:19.130624] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:58.682 [2024-12-13 11:17:19.130655] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:58.682 [2024-12-13 11:17:19.130684] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:58.682 [2024-12-13 11:17:19.130714] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:58.682 [2024-12-13 11:17:19.130746] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:58.682 [2024-12-13 11:17:19.130776] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:58.682 [2024-12-13 11:17:19.130805] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:58.682 task offset: 83968 on job bdev=Nvme1n1 fails 00:22:58.682 00:22:58.682 Latency(us) 00:22:58.682 [2024-12-13T10:17:19.251Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:58.682 [2024-12-13T10:17:19.251Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:58.682 [2024-12-13T10:17:19.251Z] Job: Nvme1n1 ended in about 1.91 seconds with error 00:22:58.682 Verification LBA range: start 0x0 length 0x400 00:22:58.682 Nvme1n1 : 1.91 329.55 20.60 33.58 0.00 175701.30 40972.14 1019060.53 00:22:58.682 [2024-12-13T10:17:19.251Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:58.682 [2024-12-13T10:17:19.251Z] Job: Nvme2n1 ended in about 1.91 seconds with error 00:22:58.682 Verification LBA range: start 0x0 length 0x400 00:22:58.682 Nvme2n1 : 1.91 310.29 19.39 33.54 0.00 184818.62 41748.86 1099839.72 00:22:58.682 [2024-12-13T10:17:19.251Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:58.682 [2024-12-13T10:17:19.251Z] Job: Nvme3n1 ended in about 1.91 seconds with error 00:22:58.682 Verification LBA range: start 0x0 length 0x400 00:22:58.682 Nvme3n1 : 1.91 314.71 19.67 33.46 0.00 181872.13 42719.76 1093625.93 00:22:58.682 [2024-12-13T10:17:19.251Z] Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:58.682 [2024-12-13T10:17:19.251Z] Job: Nvme4n1 ended in about 1.92 seconds with error 00:22:58.682 Verification LBA range: start 0x0 length 0x400 00:22:58.682 Nvme4n1 : 1.92 321.74 20.11 33.37 0.00 177778.74 40583.77 1093625.93 00:22:58.682 [2024-12-13T10:17:19.251Z] Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:58.682 [2024-12-13T10:17:19.251Z] Job: Nvme5n1 ended in about 1.93 seconds with error 00:22:58.682 Verification LBA range: start 0x0 length 0x400 00:22:58.682 Nvme5n1 : 1.93 323.00 20.19 33.18 0.00 176219.45 38253.61 1093625.93 00:22:58.682 [2024-12-13T10:17:19.251Z] Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:58.682 [2024-12-13T10:17:19.251Z] Job: Nvme6n1 ended in about 1.93 seconds with error 00:22:58.682 Verification LBA range: start 0x0 length 0x400 00:22:58.682 Nvme6n1 : 1.93 324.79 20.30 33.10 0.00 175408.77 38059.43 1093625.93 00:22:58.682 [2024-12-13T10:17:19.251Z] Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:58.682 [2024-12-13T10:17:19.251Z] Job: Nvme7n1 ended in about 1.94 seconds with error 00:22:58.682 Verification LBA range: start 0x0 length 0x400 00:22:58.682 Nvme7n1 : 1.94 323.93 20.25 33.01 0.00 175358.26 38836.15 1093625.93 00:22:58.682 [2024-12-13T10:17:19.251Z] Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:58.682 [2024-12-13T10:17:19.251Z] Job: Nvme8n1 ended in about 1.94 seconds with error 00:22:58.682 Verification LBA range: start 0x0 length 0x400 00:22:58.682 Nvme8n1 : 1.94 323.10 20.19 32.93 0.00 175055.62 39612.87 1093625.93 00:22:58.682 [2024-12-13T10:17:19.251Z] Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:58.682 [2024-12-13T10:17:19.251Z] Job: Nvme9n1 ended in about 1.95 seconds with error 00:22:58.682 Verification LBA range: start 0x0 length 0x400 00:22:58.682 Nvme9n1 : 1.95 322.26 20.14 32.84 0.00 175017.74 40389.59 1093625.93 00:22:58.682 [2024-12-13T10:17:19.251Z] Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:58.682 [2024-12-13T10:17:19.251Z] Job: Nvme10n1 ended in about 1.95 seconds with error 
00:22:58.682 Verification LBA range: start 0x0 length 0x400 00:22:58.682 Nvme10n1 : 1.95 214.54 13.41 32.77 0.00 250475.47 48739.37 1093625.93 00:22:58.682 [2024-12-13T10:17:19.251Z] =================================================================================================================== 00:22:58.682 [2024-12-13T10:17:19.251Z] Total : 3107.90 194.24 331.79 0.00 182729.45 38059.43 1099839.72 00:22:58.682 [2024-12-13 11:17:19.151856] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:58.682 [2024-12-13 11:17:19.151876] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:58.682 [2024-12-13 11:17:19.151887] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:58.682 [2024-12-13 11:17:19.151896] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:58.683 [2024-12-13 11:17:19.151903] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:58.683 [2024-12-13 11:17:19.151994] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:58.683 [2024-12-13 11:17:19.152003] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:58.683 [2024-12-13 11:17:19.152012] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:22:58.683 [2024-12-13 11:17:19.152019] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:58.683 [2024-12-13 11:17:19.152025] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:22:58.683 [2024-12-13 11:17:19.152032] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:22:58.683 [2024-12-13 11:17:19.164586] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:58.683 [2024-12-13 11:17:19.164636] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:58.683 [2024-12-13 11:17:19.164655] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:22:58.683 [2024-12-13 11:17:19.164734] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:58.683 [2024-12-13 11:17:19.164744] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:58.683 [2024-12-13 11:17:19.164750] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e53c0 00:22:58.683 [2024-12-13 11:17:19.164843] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:58.683 [2024-12-13 11:17:19.164853] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:58.683 [2024-12-13 11:17:19.164859] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ba580 00:22:58.683 [2024-12-13 11:17:19.164943] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event 
channel (status = 8) 00:22:58.683 [2024-12-13 11:17:19.164967] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:58.683 [2024-12-13 11:17:19.164984] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192dc7c0 00:22:58.683 [2024-12-13 11:17:19.165188] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:58.683 [2024-12-13 11:17:19.165215] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:58.683 [2024-12-13 11:17:19.165231] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001929c180 00:22:58.683 [2024-12-13 11:17:19.165341] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:58.683 [2024-12-13 11:17:19.165366] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:58.683 [2024-12-13 11:17:19.165383] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192a89c0 00:22:58.683 [2024-12-13 11:17:19.165503] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:58.683 [2024-12-13 11:17:19.165528] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:58.683 [2024-12-13 11:17:19.165545] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928f500 00:22:58.683 [2024-12-13 11:17:19.165655] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:58.683 [2024-12-13 11:17:19.165682] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:58.683 [2024-12-13 11:17:19.165699] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928e180 00:22:58.683 [2024-12-13 11:17:19.165809] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:58.683 [2024-12-13 11:17:19.165834] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:58.683 [2024-12-13 11:17:19.165850] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192bd540 00:22:58.683 [2024-12-13 11:17:19.165972] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:58.683 [2024-12-13 11:17:19.165983] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:58.683 [2024-12-13 11:17:19.165990] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c6100 00:22:58.941 11:17:19 -- target/shutdown.sh@141 -- # kill -9 1703353 00:22:58.941 11:17:19 -- target/shutdown.sh@143 -- # stoptarget 00:22:58.941 11:17:19 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:58.941 11:17:19 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:58.941 11:17:19 -- target/shutdown.sh@43 -- # rm -rf 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:58.941 11:17:19 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:58.941 11:17:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:58.941 11:17:19 -- nvmf/common.sh@116 -- # sync 00:22:58.941 11:17:19 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:22:58.941 11:17:19 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:22:58.941 11:17:19 -- nvmf/common.sh@119 -- # set +e 00:22:58.941 11:17:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:58.941 11:17:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:22:58.941 rmmod nvme_rdma 00:22:59.198 rmmod nvme_fabrics 00:22:59.198 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 120: 1703353 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10 00:22:59.198 11:17:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:59.198 11:17:19 -- nvmf/common.sh@123 -- # set -e 00:22:59.198 11:17:19 -- nvmf/common.sh@124 -- # return 0 00:22:59.198 11:17:19 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:22:59.198 11:17:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:59.198 11:17:19 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:22:59.198 00:22:59.198 real 0m5.169s 00:22:59.198 user 0m17.822s 00:22:59.198 sys 0m1.098s 00:22:59.198 11:17:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:59.198 11:17:19 -- common/autotest_common.sh@10 -- # set +x 00:22:59.198 ************************************ 00:22:59.198 END TEST nvmf_shutdown_tc3 00:22:59.198 ************************************ 00:22:59.198 11:17:19 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:22:59.198 00:22:59.198 real 0m23.922s 00:22:59.198 user 1m13.189s 00:22:59.198 sys 0m7.693s 00:22:59.198 11:17:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:59.198 11:17:19 -- common/autotest_common.sh@10 -- # set +x 00:22:59.198 ************************************ 00:22:59.198 END TEST nvmf_shutdown 00:22:59.198 ************************************ 00:22:59.198 11:17:19 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:22:59.198 11:17:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:59.198 11:17:19 -- common/autotest_common.sh@10 -- # set +x 00:22:59.198 11:17:19 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:22:59.198 11:17:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:59.198 11:17:19 -- common/autotest_common.sh@10 -- # set +x 00:22:59.198 11:17:19 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:22:59.198 11:17:19 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:22:59.198 11:17:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:59.198 11:17:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:59.198 11:17:19 -- common/autotest_common.sh@10 -- # set +x 00:22:59.198 ************************************ 00:22:59.198 START TEST nvmf_multicontroller 00:22:59.198 ************************************ 00:22:59.198 11:17:19 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:22:59.198 * Looking for test storage... 
00:22:59.198 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:59.198 11:17:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:59.199 11:17:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:59.199 11:17:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:59.456 11:17:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:59.456 11:17:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:59.456 11:17:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:59.456 11:17:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:59.456 11:17:19 -- scripts/common.sh@335 -- # IFS=.-: 00:22:59.456 11:17:19 -- scripts/common.sh@335 -- # read -ra ver1 00:22:59.456 11:17:19 -- scripts/common.sh@336 -- # IFS=.-: 00:22:59.456 11:17:19 -- scripts/common.sh@336 -- # read -ra ver2 00:22:59.456 11:17:19 -- scripts/common.sh@337 -- # local 'op=<' 00:22:59.456 11:17:19 -- scripts/common.sh@339 -- # ver1_l=2 00:22:59.456 11:17:19 -- scripts/common.sh@340 -- # ver2_l=1 00:22:59.456 11:17:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:59.456 11:17:19 -- scripts/common.sh@343 -- # case "$op" in 00:22:59.456 11:17:19 -- scripts/common.sh@344 -- # : 1 00:22:59.456 11:17:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:59.456 11:17:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:59.456 11:17:19 -- scripts/common.sh@364 -- # decimal 1 00:22:59.456 11:17:19 -- scripts/common.sh@352 -- # local d=1 00:22:59.456 11:17:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:59.456 11:17:19 -- scripts/common.sh@354 -- # echo 1 00:22:59.456 11:17:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:59.456 11:17:19 -- scripts/common.sh@365 -- # decimal 2 00:22:59.456 11:17:19 -- scripts/common.sh@352 -- # local d=2 00:22:59.456 11:17:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:59.456 11:17:19 -- scripts/common.sh@354 -- # echo 2 00:22:59.456 11:17:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:59.456 11:17:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:59.456 11:17:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:59.456 11:17:19 -- scripts/common.sh@367 -- # return 0 00:22:59.457 11:17:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:59.457 11:17:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:59.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.457 --rc genhtml_branch_coverage=1 00:22:59.457 --rc genhtml_function_coverage=1 00:22:59.457 --rc genhtml_legend=1 00:22:59.457 --rc geninfo_all_blocks=1 00:22:59.457 --rc geninfo_unexecuted_blocks=1 00:22:59.457 00:22:59.457 ' 00:22:59.457 11:17:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:59.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.457 --rc genhtml_branch_coverage=1 00:22:59.457 --rc genhtml_function_coverage=1 00:22:59.457 --rc genhtml_legend=1 00:22:59.457 --rc geninfo_all_blocks=1 00:22:59.457 --rc geninfo_unexecuted_blocks=1 00:22:59.457 00:22:59.457 ' 00:22:59.457 11:17:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:59.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.457 --rc genhtml_branch_coverage=1 00:22:59.457 --rc genhtml_function_coverage=1 00:22:59.457 --rc genhtml_legend=1 00:22:59.457 --rc geninfo_all_blocks=1 00:22:59.457 --rc geninfo_unexecuted_blocks=1 00:22:59.457 00:22:59.457 ' 
00:22:59.457 11:17:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:59.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.457 --rc genhtml_branch_coverage=1 00:22:59.457 --rc genhtml_function_coverage=1 00:22:59.457 --rc genhtml_legend=1 00:22:59.457 --rc geninfo_all_blocks=1 00:22:59.457 --rc geninfo_unexecuted_blocks=1 00:22:59.457 00:22:59.457 ' 00:22:59.457 11:17:19 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:59.457 11:17:19 -- nvmf/common.sh@7 -- # uname -s 00:22:59.457 11:17:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:59.457 11:17:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:59.457 11:17:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:59.457 11:17:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:59.457 11:17:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:59.457 11:17:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:59.457 11:17:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:59.457 11:17:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:59.457 11:17:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:59.457 11:17:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:59.457 11:17:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:22:59.457 11:17:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:22:59.457 11:17:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:59.457 11:17:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:59.457 11:17:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:59.457 11:17:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:59.457 11:17:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:59.457 11:17:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:59.457 11:17:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:59.457 11:17:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.457 11:17:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.457 11:17:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.457 11:17:19 -- paths/export.sh@5 -- # export PATH 00:22:59.457 11:17:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.457 11:17:19 -- nvmf/common.sh@46 -- # : 0 00:22:59.457 11:17:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:59.457 11:17:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:59.457 11:17:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:59.457 11:17:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:59.457 11:17:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:59.457 11:17:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:59.457 11:17:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:59.457 11:17:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:59.457 11:17:19 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:59.457 11:17:19 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:59.457 11:17:19 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:59.457 11:17:19 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:59.457 11:17:19 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:59.457 11:17:19 -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:22:59.457 11:17:19 -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:22:59.457 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
00:22:59.457 11:17:19 -- host/multicontroller.sh@20 -- # exit 0 00:22:59.457 00:22:59.457 real 0m0.199s 00:22:59.457 user 0m0.117s 00:22:59.457 sys 0m0.094s 00:22:59.457 11:17:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:59.457 11:17:19 -- common/autotest_common.sh@10 -- # set +x 00:22:59.457 ************************************ 00:22:59.457 END TEST nvmf_multicontroller 00:22:59.457 ************************************ 00:22:59.457 11:17:19 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:22:59.457 11:17:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:59.457 11:17:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:59.457 11:17:19 -- common/autotest_common.sh@10 -- # set +x 00:22:59.457 ************************************ 00:22:59.457 START TEST nvmf_aer 00:22:59.457 ************************************ 00:22:59.457 11:17:19 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:22:59.457 * Looking for test storage... 00:22:59.457 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:59.457 11:17:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:59.457 11:17:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:59.457 11:17:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:59.716 11:17:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:59.716 11:17:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:59.716 11:17:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:59.716 11:17:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:59.716 11:17:20 -- scripts/common.sh@335 -- # IFS=.-: 00:22:59.716 11:17:20 -- scripts/common.sh@335 -- # read -ra ver1 00:22:59.716 11:17:20 -- scripts/common.sh@336 -- # IFS=.-: 00:22:59.716 11:17:20 -- scripts/common.sh@336 -- # read -ra ver2 00:22:59.716 11:17:20 -- scripts/common.sh@337 -- # local 'op=<' 00:22:59.716 11:17:20 -- scripts/common.sh@339 -- # ver1_l=2 00:22:59.716 11:17:20 -- scripts/common.sh@340 -- # ver2_l=1 00:22:59.716 11:17:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:59.716 11:17:20 -- scripts/common.sh@343 -- # case "$op" in 00:22:59.716 11:17:20 -- scripts/common.sh@344 -- # : 1 00:22:59.716 11:17:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:59.716 11:17:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:59.716 11:17:20 -- scripts/common.sh@364 -- # decimal 1 00:22:59.716 11:17:20 -- scripts/common.sh@352 -- # local d=1 00:22:59.716 11:17:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:59.716 11:17:20 -- scripts/common.sh@354 -- # echo 1 00:22:59.716 11:17:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:59.716 11:17:20 -- scripts/common.sh@365 -- # decimal 2 00:22:59.716 11:17:20 -- scripts/common.sh@352 -- # local d=2 00:22:59.716 11:17:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:59.716 11:17:20 -- scripts/common.sh@354 -- # echo 2 00:22:59.716 11:17:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:59.716 11:17:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:59.716 11:17:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:59.716 11:17:20 -- scripts/common.sh@367 -- # return 0 00:22:59.716 11:17:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:59.716 11:17:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:59.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.716 --rc genhtml_branch_coverage=1 00:22:59.716 --rc genhtml_function_coverage=1 00:22:59.716 --rc genhtml_legend=1 00:22:59.716 --rc geninfo_all_blocks=1 00:22:59.716 --rc geninfo_unexecuted_blocks=1 00:22:59.716 00:22:59.716 ' 00:22:59.716 11:17:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:59.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.716 --rc genhtml_branch_coverage=1 00:22:59.716 --rc genhtml_function_coverage=1 00:22:59.716 --rc genhtml_legend=1 00:22:59.716 --rc geninfo_all_blocks=1 00:22:59.716 --rc geninfo_unexecuted_blocks=1 00:22:59.716 00:22:59.716 ' 00:22:59.716 11:17:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:59.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.716 --rc genhtml_branch_coverage=1 00:22:59.716 --rc genhtml_function_coverage=1 00:22:59.716 --rc genhtml_legend=1 00:22:59.716 --rc geninfo_all_blocks=1 00:22:59.716 --rc geninfo_unexecuted_blocks=1 00:22:59.716 00:22:59.716 ' 00:22:59.716 11:17:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:59.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.716 --rc genhtml_branch_coverage=1 00:22:59.716 --rc genhtml_function_coverage=1 00:22:59.716 --rc genhtml_legend=1 00:22:59.716 --rc geninfo_all_blocks=1 00:22:59.716 --rc geninfo_unexecuted_blocks=1 00:22:59.716 00:22:59.716 ' 00:22:59.716 11:17:20 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:59.716 11:17:20 -- nvmf/common.sh@7 -- # uname -s 00:22:59.716 11:17:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:59.716 11:17:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:59.716 11:17:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:59.716 11:17:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:59.716 11:17:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:59.716 11:17:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:59.716 11:17:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:59.716 11:17:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:59.716 11:17:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:59.716 11:17:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:59.716 11:17:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 
00:22:59.716 11:17:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:22:59.716 11:17:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:59.716 11:17:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:59.716 11:17:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:59.716 11:17:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:59.716 11:17:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:59.716 11:17:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:59.716 11:17:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:59.717 11:17:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.717 11:17:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.717 11:17:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.717 11:17:20 -- paths/export.sh@5 -- # export PATH 00:22:59.717 11:17:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.717 11:17:20 -- nvmf/common.sh@46 -- # : 0 00:22:59.717 11:17:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:59.717 11:17:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:59.717 11:17:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:59.717 11:17:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:59.717 11:17:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:59.717 11:17:20 -- 
nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:59.717 11:17:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:59.717 11:17:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:59.717 11:17:20 -- host/aer.sh@11 -- # nvmftestinit 00:22:59.717 11:17:20 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:22:59.717 11:17:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:59.717 11:17:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:59.717 11:17:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:59.717 11:17:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:59.717 11:17:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.717 11:17:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:59.717 11:17:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.717 11:17:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:59.717 11:17:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:59.717 11:17:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:59.717 11:17:20 -- common/autotest_common.sh@10 -- # set +x 00:23:06.269 11:17:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:06.269 11:17:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:06.269 11:17:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:06.269 11:17:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:06.269 11:17:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:06.269 11:17:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:06.269 11:17:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:06.270 11:17:25 -- nvmf/common.sh@294 -- # net_devs=() 00:23:06.270 11:17:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:06.270 11:17:25 -- nvmf/common.sh@295 -- # e810=() 00:23:06.270 11:17:25 -- nvmf/common.sh@295 -- # local -ga e810 00:23:06.270 11:17:25 -- nvmf/common.sh@296 -- # x722=() 00:23:06.270 11:17:25 -- nvmf/common.sh@296 -- # local -ga x722 00:23:06.270 11:17:25 -- nvmf/common.sh@297 -- # mlx=() 00:23:06.270 11:17:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:06.270 11:17:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:06.270 11:17:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:06.270 11:17:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:06.270 11:17:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:06.270 11:17:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:06.270 11:17:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:06.270 11:17:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:06.270 11:17:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:06.270 11:17:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:06.270 11:17:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:06.270 11:17:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:06.270 11:17:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:06.270 11:17:25 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:23:06.270 11:17:25 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:23:06.270 11:17:25 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:23:06.270 11:17:25 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:23:06.270 11:17:25 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:23:06.270 11:17:25 -- 
nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:06.270 11:17:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:06.270 11:17:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:23:06.270 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:23:06.270 11:17:25 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:06.270 11:17:25 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:06.270 11:17:25 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:06.270 11:17:25 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:06.270 11:17:25 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:06.270 11:17:25 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:06.270 11:17:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:06.270 11:17:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:23:06.270 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:23:06.270 11:17:25 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:06.270 11:17:25 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:06.270 11:17:25 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:06.270 11:17:25 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:06.270 11:17:25 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:06.270 11:17:25 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:06.270 11:17:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:06.270 11:17:25 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:23:06.270 11:17:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:06.270 11:17:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.270 11:17:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:06.270 11:17:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.270 11:17:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:23:06.270 Found net devices under 0000:18:00.0: mlx_0_0 00:23:06.270 11:17:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.270 11:17:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:06.270 11:17:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.270 11:17:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:06.270 11:17:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.270 11:17:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:23:06.270 Found net devices under 0000:18:00.1: mlx_0_1 00:23:06.270 11:17:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.270 11:17:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:06.270 11:17:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:06.270 11:17:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:06.270 11:17:25 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:23:06.270 11:17:25 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:23:06.270 11:17:25 -- nvmf/common.sh@408 -- # rdma_device_init 00:23:06.270 11:17:25 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:23:06.270 11:17:25 -- nvmf/common.sh@57 -- # uname 00:23:06.270 11:17:25 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:23:06.270 11:17:25 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:23:06.270 11:17:25 -- nvmf/common.sh@62 -- # modprobe ib_core 00:23:06.270 11:17:25 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:23:06.270 11:17:25 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:23:06.270 11:17:25 -- nvmf/common.sh@65 -- # 
modprobe iw_cm 00:23:06.270 11:17:25 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:23:06.270 11:17:25 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:23:06.270 11:17:25 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:23:06.270 11:17:25 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:06.270 11:17:25 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:23:06.270 11:17:25 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:06.270 11:17:25 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:06.270 11:17:25 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:06.270 11:17:25 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:06.270 11:17:25 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:06.270 11:17:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:06.270 11:17:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:06.270 11:17:25 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:06.270 11:17:25 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:06.270 11:17:25 -- nvmf/common.sh@104 -- # continue 2 00:23:06.270 11:17:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:06.270 11:17:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:06.270 11:17:25 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:06.270 11:17:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:06.270 11:17:25 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:06.270 11:17:25 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:06.270 11:17:25 -- nvmf/common.sh@104 -- # continue 2 00:23:06.270 11:17:25 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:06.270 11:17:25 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:23:06.270 11:17:25 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:06.270 11:17:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:06.270 11:17:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:06.270 11:17:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:06.270 11:17:25 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:23:06.270 11:17:25 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:23:06.270 11:17:25 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:23:06.270 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:06.270 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:23:06.270 altname enp24s0f0np0 00:23:06.270 altname ens785f0np0 00:23:06.270 inet 192.168.100.8/24 scope global mlx_0_0 00:23:06.270 valid_lft forever preferred_lft forever 00:23:06.270 11:17:25 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:06.270 11:17:25 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:23:06.270 11:17:25 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:06.270 11:17:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:06.270 11:17:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:06.270 11:17:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:06.270 11:17:25 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:23:06.270 11:17:25 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:23:06.270 11:17:25 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:23:06.270 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:06.270 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:23:06.270 altname enp24s0f1np1 00:23:06.270 altname ens785f1np1 00:23:06.270 inet 192.168.100.9/24 scope global mlx_0_1 00:23:06.270 valid_lft 
forever preferred_lft forever 00:23:06.270 11:17:25 -- nvmf/common.sh@410 -- # return 0 00:23:06.270 11:17:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:06.270 11:17:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:06.270 11:17:25 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:23:06.270 11:17:25 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:23:06.270 11:17:25 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:23:06.270 11:17:25 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:06.270 11:17:25 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:06.270 11:17:25 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:06.270 11:17:25 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:06.270 11:17:25 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:06.270 11:17:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:06.270 11:17:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:06.270 11:17:25 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:06.270 11:17:25 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:06.270 11:17:25 -- nvmf/common.sh@104 -- # continue 2 00:23:06.270 11:17:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:06.270 11:17:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:06.270 11:17:25 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:06.270 11:17:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:06.270 11:17:25 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:06.270 11:17:25 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:06.270 11:17:25 -- nvmf/common.sh@104 -- # continue 2 00:23:06.270 11:17:25 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:06.270 11:17:25 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:23:06.270 11:17:25 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:06.270 11:17:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:06.270 11:17:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:06.270 11:17:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:06.270 11:17:25 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:06.270 11:17:25 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:23:06.270 11:17:25 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:06.270 11:17:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:06.270 11:17:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:06.270 11:17:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:06.271 11:17:25 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:23:06.271 192.168.100.9' 00:23:06.271 11:17:25 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:23:06.271 192.168.100.9' 00:23:06.271 11:17:25 -- nvmf/common.sh@445 -- # head -n 1 00:23:06.271 11:17:25 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:06.271 11:17:25 -- nvmf/common.sh@446 -- # head -n 1 00:23:06.271 11:17:25 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:06.271 192.168.100.9' 00:23:06.271 11:17:25 -- nvmf/common.sh@446 -- # tail -n +2 00:23:06.271 11:17:25 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:06.271 11:17:25 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:23:06.271 11:17:25 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:06.271 11:17:25 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:23:06.271 11:17:25 -- 
nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:23:06.271 11:17:25 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:23:06.271 11:17:25 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:06.271 11:17:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:06.271 11:17:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:06.271 11:17:25 -- common/autotest_common.sh@10 -- # set +x 00:23:06.271 11:17:25 -- nvmf/common.sh@469 -- # nvmfpid=1707384 00:23:06.271 11:17:25 -- nvmf/common.sh@470 -- # waitforlisten 1707384 00:23:06.271 11:17:25 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:06.271 11:17:25 -- common/autotest_common.sh@829 -- # '[' -z 1707384 ']' 00:23:06.271 11:17:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.271 11:17:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:06.271 11:17:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:06.271 11:17:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:06.271 11:17:25 -- common/autotest_common.sh@10 -- # set +x 00:23:06.271 [2024-12-13 11:17:25.846768] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:06.271 [2024-12-13 11:17:25.846820] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:06.271 EAL: No free 2048 kB hugepages reported on node 1 00:23:06.271 [2024-12-13 11:17:25.899457] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:06.271 [2024-12-13 11:17:25.973330] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:06.271 [2024-12-13 11:17:25.973430] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:06.271 [2024-12-13 11:17:25.973437] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:06.271 [2024-12-13 11:17:25.973442] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
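(Aside: the target-address discovery traced in nvmf/common.sh above — allocate_nic_ips / get_ip_address — boils down to reading the first IPv4 address of each mlx_0_* interface with ip, awk and cut. The following is a minimal stand-alone sketch of that parsing, assuming the same interface names seen in this run; it is not the exact common.sh implementation.)

    #!/usr/bin/env bash
    # Sketch: derive the NVMf target IPs the way the traced pipeline does.
    # Assumes the RDMA netdevs are named mlx_0_0 and mlx_0_1 as in this log.
    get_ip_address() {
        local interface=$1
        # "ip -o -4 addr show" prints one line per address; field 4 is the CIDR
        # form (e.g. 192.168.100.8/24), so cut strips the prefix length.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run
    echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"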
00:23:06.271 [2024-12-13 11:17:25.973482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.271 [2024-12-13 11:17:25.973572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:06.271 [2024-12-13 11:17:25.973645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:06.271 [2024-12-13 11:17:25.973646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.271 11:17:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:06.271 11:17:26 -- common/autotest_common.sh@862 -- # return 0 00:23:06.271 11:17:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:06.271 11:17:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:06.271 11:17:26 -- common/autotest_common.sh@10 -- # set +x 00:23:06.271 11:17:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:06.271 11:17:26 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:06.271 11:17:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.271 11:17:26 -- common/autotest_common.sh@10 -- # set +x 00:23:06.271 [2024-12-13 11:17:26.708747] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2120960/0x2124e50) succeed. 00:23:06.271 [2024-12-13 11:17:26.716933] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2121f50/0x21664f0) succeed. 00:23:06.271 11:17:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.271 11:17:26 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:06.271 11:17:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.271 11:17:26 -- common/autotest_common.sh@10 -- # set +x 00:23:06.528 Malloc0 00:23:06.528 11:17:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.528 11:17:26 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:06.528 11:17:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.528 11:17:26 -- common/autotest_common.sh@10 -- # set +x 00:23:06.528 11:17:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.528 11:17:26 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:06.528 11:17:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.528 11:17:26 -- common/autotest_common.sh@10 -- # set +x 00:23:06.528 11:17:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.528 11:17:26 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:06.528 11:17:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.528 11:17:26 -- common/autotest_common.sh@10 -- # set +x 00:23:06.528 [2024-12-13 11:17:26.873835] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:06.528 11:17:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.528 11:17:26 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:06.528 11:17:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.528 11:17:26 -- common/autotest_common.sh@10 -- # set +x 00:23:06.528 [2024-12-13 11:17:26.881494] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:23:06.528 [ 00:23:06.528 { 00:23:06.528 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:06.528 "subtype": 
"Discovery", 00:23:06.528 "listen_addresses": [], 00:23:06.528 "allow_any_host": true, 00:23:06.528 "hosts": [] 00:23:06.528 }, 00:23:06.528 { 00:23:06.528 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:06.528 "subtype": "NVMe", 00:23:06.528 "listen_addresses": [ 00:23:06.528 { 00:23:06.528 "transport": "RDMA", 00:23:06.528 "trtype": "RDMA", 00:23:06.528 "adrfam": "IPv4", 00:23:06.528 "traddr": "192.168.100.8", 00:23:06.528 "trsvcid": "4420" 00:23:06.528 } 00:23:06.528 ], 00:23:06.528 "allow_any_host": true, 00:23:06.528 "hosts": [], 00:23:06.528 "serial_number": "SPDK00000000000001", 00:23:06.528 "model_number": "SPDK bdev Controller", 00:23:06.528 "max_namespaces": 2, 00:23:06.528 "min_cntlid": 1, 00:23:06.528 "max_cntlid": 65519, 00:23:06.528 "namespaces": [ 00:23:06.528 { 00:23:06.528 "nsid": 1, 00:23:06.528 "bdev_name": "Malloc0", 00:23:06.528 "name": "Malloc0", 00:23:06.528 "nguid": "734DA0B4115E45FA9B390F3BF04AE22F", 00:23:06.528 "uuid": "734da0b4-115e-45fa-9b39-0f3bf04ae22f" 00:23:06.528 } 00:23:06.528 ] 00:23:06.528 } 00:23:06.528 ] 00:23:06.528 11:17:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.528 11:17:26 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:06.528 11:17:26 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:06.528 11:17:26 -- host/aer.sh@33 -- # aerpid=1707565 00:23:06.528 11:17:26 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:06.528 11:17:26 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:06.528 11:17:26 -- common/autotest_common.sh@1254 -- # local i=0 00:23:06.528 11:17:26 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:06.528 11:17:26 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:23:06.528 11:17:26 -- common/autotest_common.sh@1257 -- # i=1 00:23:06.528 11:17:26 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:23:06.528 EAL: No free 2048 kB hugepages reported on node 1 00:23:06.528 11:17:26 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:06.528 11:17:26 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:23:06.528 11:17:26 -- common/autotest_common.sh@1257 -- # i=2 00:23:06.528 11:17:27 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:23:06.786 11:17:27 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:06.786 11:17:27 -- common/autotest_common.sh@1261 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:06.786 11:17:27 -- common/autotest_common.sh@1265 -- # return 0 00:23:06.786 11:17:27 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:06.786 11:17:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.786 11:17:27 -- common/autotest_common.sh@10 -- # set +x 00:23:06.786 Malloc1 00:23:06.786 11:17:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.786 11:17:27 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:06.786 11:17:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.786 11:17:27 -- common/autotest_common.sh@10 -- # set +x 00:23:06.786 11:17:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.786 11:17:27 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:06.786 11:17:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.786 11:17:27 -- common/autotest_common.sh@10 -- # set +x 00:23:06.786 [ 00:23:06.786 { 00:23:06.786 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:06.786 "subtype": "Discovery", 00:23:06.786 "listen_addresses": [], 00:23:06.786 "allow_any_host": true, 00:23:06.786 "hosts": [] 00:23:06.786 }, 00:23:06.786 { 00:23:06.786 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:06.786 "subtype": "NVMe", 00:23:06.786 "listen_addresses": [ 00:23:06.786 { 00:23:06.786 "transport": "RDMA", 00:23:06.786 "trtype": "RDMA", 00:23:06.786 "adrfam": "IPv4", 00:23:06.786 "traddr": "192.168.100.8", 00:23:06.786 "trsvcid": "4420" 00:23:06.786 } 00:23:06.786 ], 00:23:06.786 "allow_any_host": true, 00:23:06.786 "hosts": [], 00:23:06.786 "serial_number": "SPDK00000000000001", 00:23:06.786 "model_number": "SPDK bdev Controller", 00:23:06.786 "max_namespaces": 2, 00:23:06.786 "min_cntlid": 1, 00:23:06.786 "max_cntlid": 65519, 00:23:06.786 "namespaces": [ 00:23:06.786 { 00:23:06.786 "nsid": 1, 00:23:06.786 "bdev_name": "Malloc0", 00:23:06.786 "name": "Malloc0", 00:23:06.786 "nguid": "734DA0B4115E45FA9B390F3BF04AE22F", 00:23:06.786 "uuid": "734da0b4-115e-45fa-9b39-0f3bf04ae22f" 00:23:06.786 }, 00:23:06.786 { 00:23:06.786 "nsid": 2, 00:23:06.786 "bdev_name": "Malloc1", 00:23:06.786 "name": "Malloc1", 00:23:06.786 "nguid": "42BFD49108BE431480553B5D3B6EFA35", 00:23:06.786 "uuid": "42bfd491-08be-4314-8055-3b5d3b6efa35" 00:23:06.786 } 00:23:06.786 ] 00:23:06.786 } 00:23:06.786 ] 00:23:06.786 11:17:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.786 11:17:27 -- host/aer.sh@43 -- # wait 1707565 00:23:06.786 Asynchronous Event Request test 00:23:06.786 Attaching to 192.168.100.8 00:23:06.786 Attached to 192.168.100.8 00:23:06.786 Registering asynchronous event callbacks... 00:23:06.786 Starting namespace attribute notice tests for all controllers... 00:23:06.786 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:06.786 aer_cb - Changed Namespace 00:23:06.786 Cleaning up... 
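Condensed, the AER exercise above reduces to a handful of RPCs; a minimal sketch using scripts/rpc.py against an already running nvmf_tgt, with every argument copied from the trace (default RPC socket and repository-relative paths assumed, and the wait-on-touch-file handshake the harness uses is simplified to a background job):
# RDMA transport with 1024 shared buffers and 8 KiB in-capsule data
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
# two 64 MiB malloc bdevs, 512 B and 4 KiB blocks
./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
# subsystem capped at two namespaces, exposed on RDMA 192.168.100.8:4420
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
# the aer tool registers the AEN callback; adding the second namespace is what
# produces the "aer_cb - Changed Namespace" notice logged above
./test/nvme/aer/aer -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2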
00:23:06.786 11:17:27 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:06.786 11:17:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.786 11:17:27 -- common/autotest_common.sh@10 -- # set +x 00:23:06.786 11:17:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.786 11:17:27 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:06.786 11:17:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.786 11:17:27 -- common/autotest_common.sh@10 -- # set +x 00:23:06.786 11:17:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.786 11:17:27 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:06.786 11:17:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.786 11:17:27 -- common/autotest_common.sh@10 -- # set +x 00:23:06.786 11:17:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.786 11:17:27 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:06.786 11:17:27 -- host/aer.sh@51 -- # nvmftestfini 00:23:06.786 11:17:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:06.786 11:17:27 -- nvmf/common.sh@116 -- # sync 00:23:06.786 11:17:27 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:23:06.786 11:17:27 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:23:06.786 11:17:27 -- nvmf/common.sh@119 -- # set +e 00:23:06.786 11:17:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:06.786 11:17:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:23:06.786 rmmod nvme_rdma 00:23:06.786 rmmod nvme_fabrics 00:23:06.786 11:17:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:06.786 11:17:27 -- nvmf/common.sh@123 -- # set -e 00:23:06.786 11:17:27 -- nvmf/common.sh@124 -- # return 0 00:23:06.786 11:17:27 -- nvmf/common.sh@477 -- # '[' -n 1707384 ']' 00:23:06.786 11:17:27 -- nvmf/common.sh@478 -- # killprocess 1707384 00:23:06.786 11:17:27 -- common/autotest_common.sh@936 -- # '[' -z 1707384 ']' 00:23:06.786 11:17:27 -- common/autotest_common.sh@940 -- # kill -0 1707384 00:23:06.786 11:17:27 -- common/autotest_common.sh@941 -- # uname 00:23:06.786 11:17:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:06.786 11:17:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1707384 00:23:07.043 11:17:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:07.043 11:17:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:07.043 11:17:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1707384' 00:23:07.043 killing process with pid 1707384 00:23:07.043 11:17:27 -- common/autotest_common.sh@955 -- # kill 1707384 00:23:07.043 [2024-12-13 11:17:27.358040] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:23:07.043 11:17:27 -- common/autotest_common.sh@960 -- # wait 1707384 00:23:07.300 11:17:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:07.300 11:17:27 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:23:07.300 00:23:07.300 real 0m7.723s 00:23:07.300 user 0m8.140s 00:23:07.300 sys 0m4.791s 00:23:07.300 11:17:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:07.300 11:17:27 -- common/autotest_common.sh@10 -- # set +x 00:23:07.300 ************************************ 00:23:07.300 END TEST nvmf_aer 00:23:07.300 ************************************ 00:23:07.301 11:17:27 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:23:07.301 11:17:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:07.301 11:17:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:07.301 11:17:27 -- common/autotest_common.sh@10 -- # set +x 00:23:07.301 ************************************ 00:23:07.301 START TEST nvmf_async_init 00:23:07.301 ************************************ 00:23:07.301 11:17:27 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:23:07.301 * Looking for test storage... 00:23:07.301 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:07.301 11:17:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:07.301 11:17:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:07.301 11:17:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:07.301 11:17:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:07.301 11:17:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:07.301 11:17:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:07.301 11:17:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:07.301 11:17:27 -- scripts/common.sh@335 -- # IFS=.-: 00:23:07.301 11:17:27 -- scripts/common.sh@335 -- # read -ra ver1 00:23:07.301 11:17:27 -- scripts/common.sh@336 -- # IFS=.-: 00:23:07.301 11:17:27 -- scripts/common.sh@336 -- # read -ra ver2 00:23:07.301 11:17:27 -- scripts/common.sh@337 -- # local 'op=<' 00:23:07.301 11:17:27 -- scripts/common.sh@339 -- # ver1_l=2 00:23:07.301 11:17:27 -- scripts/common.sh@340 -- # ver2_l=1 00:23:07.301 11:17:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:07.301 11:17:27 -- scripts/common.sh@343 -- # case "$op" in 00:23:07.301 11:17:27 -- scripts/common.sh@344 -- # : 1 00:23:07.301 11:17:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:07.301 11:17:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:07.301 11:17:27 -- scripts/common.sh@364 -- # decimal 1 00:23:07.301 11:17:27 -- scripts/common.sh@352 -- # local d=1 00:23:07.301 11:17:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:07.301 11:17:27 -- scripts/common.sh@354 -- # echo 1 00:23:07.301 11:17:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:07.301 11:17:27 -- scripts/common.sh@365 -- # decimal 2 00:23:07.301 11:17:27 -- scripts/common.sh@352 -- # local d=2 00:23:07.301 11:17:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:07.301 11:17:27 -- scripts/common.sh@354 -- # echo 2 00:23:07.301 11:17:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:07.301 11:17:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:07.301 11:17:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:07.301 11:17:27 -- scripts/common.sh@367 -- # return 0 00:23:07.301 11:17:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:07.301 11:17:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:07.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.301 --rc genhtml_branch_coverage=1 00:23:07.301 --rc genhtml_function_coverage=1 00:23:07.301 --rc genhtml_legend=1 00:23:07.301 --rc geninfo_all_blocks=1 00:23:07.301 --rc geninfo_unexecuted_blocks=1 00:23:07.301 00:23:07.301 ' 00:23:07.301 11:17:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:07.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.301 --rc genhtml_branch_coverage=1 00:23:07.301 --rc genhtml_function_coverage=1 00:23:07.301 --rc genhtml_legend=1 00:23:07.301 --rc geninfo_all_blocks=1 00:23:07.301 --rc geninfo_unexecuted_blocks=1 00:23:07.301 00:23:07.301 ' 00:23:07.301 11:17:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:07.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.301 --rc genhtml_branch_coverage=1 00:23:07.301 --rc genhtml_function_coverage=1 00:23:07.301 --rc genhtml_legend=1 00:23:07.301 --rc geninfo_all_blocks=1 00:23:07.301 --rc geninfo_unexecuted_blocks=1 00:23:07.301 00:23:07.301 ' 00:23:07.301 11:17:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:07.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.301 --rc genhtml_branch_coverage=1 00:23:07.301 --rc genhtml_function_coverage=1 00:23:07.301 --rc genhtml_legend=1 00:23:07.301 --rc geninfo_all_blocks=1 00:23:07.301 --rc geninfo_unexecuted_blocks=1 00:23:07.301 00:23:07.301 ' 00:23:07.301 11:17:27 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:07.301 11:17:27 -- nvmf/common.sh@7 -- # uname -s 00:23:07.301 11:17:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:07.301 11:17:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:07.301 11:17:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:07.301 11:17:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:07.301 11:17:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:07.301 11:17:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:07.301 11:17:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:07.301 11:17:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:07.301 11:17:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:07.301 11:17:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:07.301 11:17:27 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:23:07.301 11:17:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:23:07.301 11:17:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:07.301 11:17:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:07.301 11:17:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:07.301 11:17:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:07.301 11:17:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:07.301 11:17:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:07.301 11:17:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:07.301 11:17:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.301 11:17:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.301 11:17:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.301 11:17:27 -- paths/export.sh@5 -- # export PATH 00:23:07.301 11:17:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.301 11:17:27 -- nvmf/common.sh@46 -- # : 0 00:23:07.301 11:17:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:07.301 11:17:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:07.301 11:17:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:07.301 11:17:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:07.301 11:17:27 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:07.301 11:17:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:07.301 11:17:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:07.301 11:17:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:07.301 11:17:27 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:07.301 11:17:27 -- host/async_init.sh@14 -- # null_block_size=512 00:23:07.301 11:17:27 -- host/async_init.sh@15 -- # null_bdev=null0 00:23:07.301 11:17:27 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:07.301 11:17:27 -- host/async_init.sh@20 -- # uuidgen 00:23:07.301 11:17:27 -- host/async_init.sh@20 -- # tr -d - 00:23:07.301 11:17:27 -- host/async_init.sh@20 -- # nguid=1f1b451821e64e608ee72d5a67399c1f 00:23:07.301 11:17:27 -- host/async_init.sh@22 -- # nvmftestinit 00:23:07.301 11:17:27 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:23:07.301 11:17:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:07.301 11:17:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:07.301 11:17:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:07.301 11:17:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:07.301 11:17:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.301 11:17:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:07.301 11:17:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.560 11:17:27 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:07.560 11:17:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:07.560 11:17:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:07.560 11:17:27 -- common/autotest_common.sh@10 -- # set +x 00:23:12.817 11:17:32 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:12.817 11:17:32 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:12.817 11:17:32 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:12.817 11:17:32 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:12.817 11:17:32 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:12.817 11:17:32 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:12.817 11:17:32 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:12.817 11:17:32 -- nvmf/common.sh@294 -- # net_devs=() 00:23:12.817 11:17:32 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:12.817 11:17:32 -- nvmf/common.sh@295 -- # e810=() 00:23:12.817 11:17:32 -- nvmf/common.sh@295 -- # local -ga e810 00:23:12.817 11:17:32 -- nvmf/common.sh@296 -- # x722=() 00:23:12.817 11:17:32 -- nvmf/common.sh@296 -- # local -ga x722 00:23:12.817 11:17:32 -- nvmf/common.sh@297 -- # mlx=() 00:23:12.817 11:17:32 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:12.817 11:17:32 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:12.817 11:17:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:12.818 11:17:32 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:12.818 11:17:32 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:12.818 11:17:32 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:12.818 11:17:32 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:12.818 11:17:32 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:12.818 11:17:32 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:12.818 11:17:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:12.818 11:17:32 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:12.818 11:17:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:12.818 11:17:32 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:12.818 11:17:32 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:23:12.818 11:17:32 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:23:12.818 11:17:32 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:23:12.818 11:17:32 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:23:12.818 11:17:32 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:23:12.818 11:17:32 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:12.818 11:17:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:12.818 11:17:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:23:12.818 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:23:12.818 11:17:32 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:12.818 11:17:32 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:12.818 11:17:32 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:12.818 11:17:32 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:12.818 11:17:32 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:12.818 11:17:32 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:12.818 11:17:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:12.818 11:17:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:23:12.818 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:23:12.818 11:17:32 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:12.818 11:17:32 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:12.818 11:17:32 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:12.818 11:17:32 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:12.818 11:17:32 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:12.818 11:17:32 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:12.818 11:17:32 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:12.818 11:17:32 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:23:12.818 11:17:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:12.818 11:17:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:12.818 11:17:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:12.818 11:17:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:12.818 11:17:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:23:12.818 Found net devices under 0000:18:00.0: mlx_0_0 00:23:12.818 11:17:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:12.818 11:17:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:12.818 11:17:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:12.818 11:17:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:12.818 11:17:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:12.818 11:17:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:23:12.818 Found net devices under 0000:18:00.1: mlx_0_1 00:23:12.818 11:17:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:12.818 11:17:32 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:12.818 11:17:32 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:12.818 11:17:32 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:12.818 11:17:32 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:23:12.818 11:17:32 -- nvmf/common.sh@407 -- 
# [[ rdma == rdma ]] 00:23:12.818 11:17:32 -- nvmf/common.sh@408 -- # rdma_device_init 00:23:12.818 11:17:32 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:23:12.818 11:17:32 -- nvmf/common.sh@57 -- # uname 00:23:12.818 11:17:32 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:23:12.818 11:17:32 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:23:12.818 11:17:32 -- nvmf/common.sh@62 -- # modprobe ib_core 00:23:12.818 11:17:32 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:23:12.818 11:17:32 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:23:12.818 11:17:32 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:23:12.818 11:17:32 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:23:12.818 11:17:32 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:23:12.818 11:17:32 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:23:12.818 11:17:32 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:12.818 11:17:32 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:23:12.818 11:17:32 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:12.818 11:17:32 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:12.818 11:17:32 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:12.818 11:17:32 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:12.818 11:17:32 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:12.818 11:17:32 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:12.818 11:17:32 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:12.818 11:17:32 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:12.818 11:17:32 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:12.818 11:17:32 -- nvmf/common.sh@104 -- # continue 2 00:23:12.818 11:17:32 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:12.818 11:17:32 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:12.818 11:17:32 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:12.818 11:17:32 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:12.818 11:17:32 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:12.818 11:17:32 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:12.818 11:17:32 -- nvmf/common.sh@104 -- # continue 2 00:23:12.818 11:17:32 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:12.818 11:17:32 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:23:12.818 11:17:32 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:12.818 11:17:32 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:12.818 11:17:32 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:12.818 11:17:32 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:12.818 11:17:32 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:23:12.818 11:17:32 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:23:12.818 11:17:32 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:23:12.818 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:12.818 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:23:12.818 altname enp24s0f0np0 00:23:12.818 altname ens785f0np0 00:23:12.818 inet 192.168.100.8/24 scope global mlx_0_0 00:23:12.818 valid_lft forever preferred_lft forever 00:23:12.818 11:17:32 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:12.818 11:17:32 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:23:12.818 11:17:32 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:12.818 11:17:32 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:12.818 11:17:32 -- 
nvmf/common.sh@112 -- # awk '{print $4}' 00:23:12.818 11:17:32 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:12.818 11:17:32 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:23:12.818 11:17:32 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:23:12.818 11:17:32 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:23:12.818 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:12.818 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:23:12.818 altname enp24s0f1np1 00:23:12.818 altname ens785f1np1 00:23:12.818 inet 192.168.100.9/24 scope global mlx_0_1 00:23:12.818 valid_lft forever preferred_lft forever 00:23:12.818 11:17:32 -- nvmf/common.sh@410 -- # return 0 00:23:12.818 11:17:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:12.818 11:17:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:12.818 11:17:32 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:23:12.818 11:17:32 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:23:12.818 11:17:32 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:23:12.818 11:17:32 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:12.818 11:17:32 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:12.818 11:17:32 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:12.818 11:17:32 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:12.818 11:17:32 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:12.818 11:17:32 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:12.818 11:17:32 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:12.818 11:17:32 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:12.818 11:17:32 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:12.818 11:17:32 -- nvmf/common.sh@104 -- # continue 2 00:23:12.818 11:17:32 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:12.818 11:17:32 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:12.818 11:17:32 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:12.818 11:17:32 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:12.818 11:17:32 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:12.818 11:17:32 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:12.818 11:17:32 -- nvmf/common.sh@104 -- # continue 2 00:23:12.818 11:17:32 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:12.818 11:17:32 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:23:12.818 11:17:32 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:12.818 11:17:32 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:12.818 11:17:32 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:12.818 11:17:32 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:12.818 11:17:32 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:12.818 11:17:32 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:23:12.818 11:17:32 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:12.818 11:17:32 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:12.818 11:17:32 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:12.818 11:17:32 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:12.818 11:17:33 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:23:12.818 192.168.100.9' 00:23:12.818 11:17:33 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:23:12.818 192.168.100.9' 00:23:12.818 11:17:33 -- nvmf/common.sh@445 -- # head -n 1 00:23:12.818 11:17:33 -- nvmf/common.sh@445 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:12.818 11:17:33 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:12.818 192.168.100.9' 00:23:12.818 11:17:33 -- nvmf/common.sh@446 -- # tail -n +2 00:23:12.818 11:17:33 -- nvmf/common.sh@446 -- # head -n 1 00:23:12.818 11:17:33 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:12.818 11:17:33 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:23:12.819 11:17:33 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:12.819 11:17:33 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:23:12.819 11:17:33 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:23:12.819 11:17:33 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:23:12.819 11:17:33 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:12.819 11:17:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:12.819 11:17:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:12.819 11:17:33 -- common/autotest_common.sh@10 -- # set +x 00:23:12.819 11:17:33 -- nvmf/common.sh@469 -- # nvmfpid=1710833 00:23:12.819 11:17:33 -- nvmf/common.sh@470 -- # waitforlisten 1710833 00:23:12.819 11:17:33 -- common/autotest_common.sh@829 -- # '[' -z 1710833 ']' 00:23:12.819 11:17:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.819 11:17:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:12.819 11:17:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:12.819 11:17:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:12.819 11:17:33 -- common/autotest_common.sh@10 -- # set +x 00:23:12.819 11:17:33 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:12.819 [2024-12-13 11:17:33.087876] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:12.819 [2024-12-13 11:17:33.087920] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:12.819 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.819 [2024-12-13 11:17:33.137627] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.819 [2024-12-13 11:17:33.212904] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:12.819 [2024-12-13 11:17:33.212994] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:12.819 [2024-12-13 11:17:33.213002] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:12.819 [2024-12-13 11:17:33.213007] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:12.819 [2024-12-13 11:17:33.213023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.384 11:17:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:13.384 11:17:33 -- common/autotest_common.sh@862 -- # return 0 00:23:13.384 11:17:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:13.384 11:17:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:13.384 11:17:33 -- common/autotest_common.sh@10 -- # set +x 00:23:13.384 11:17:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:13.384 11:17:33 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:23:13.384 11:17:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.384 11:17:33 -- common/autotest_common.sh@10 -- # set +x 00:23:13.384 [2024-12-13 11:17:33.912245] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xee47e0/0xee8cd0) succeed. 00:23:13.384 [2024-12-13 11:17:33.920428] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xee5ce0/0xf2a370) succeed. 00:23:13.641 11:17:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.641 11:17:33 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:13.641 11:17:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.641 11:17:33 -- common/autotest_common.sh@10 -- # set +x 00:23:13.641 null0 00:23:13.641 11:17:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.641 11:17:33 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:13.641 11:17:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.641 11:17:33 -- common/autotest_common.sh@10 -- # set +x 00:23:13.641 11:17:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.641 11:17:33 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:13.641 11:17:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.641 11:17:33 -- common/autotest_common.sh@10 -- # set +x 00:23:13.641 11:17:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.641 11:17:33 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 1f1b451821e64e608ee72d5a67399c1f 00:23:13.641 11:17:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.641 11:17:33 -- common/autotest_common.sh@10 -- # set +x 00:23:13.641 11:17:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.641 11:17:33 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:23:13.641 11:17:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.641 11:17:33 -- common/autotest_common.sh@10 -- # set +x 00:23:13.641 [2024-12-13 11:17:33.999115] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:13.641 11:17:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.641 11:17:34 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:13.641 11:17:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.641 11:17:34 -- common/autotest_common.sh@10 -- # set +x 00:23:13.641 nvme0n1 00:23:13.641 11:17:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.641 11:17:34 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:13.641 11:17:34 -- common/autotest_common.sh@561 
-- # xtrace_disable 00:23:13.641 11:17:34 -- common/autotest_common.sh@10 -- # set +x 00:23:13.641 [ 00:23:13.641 { 00:23:13.641 "name": "nvme0n1", 00:23:13.641 "aliases": [ 00:23:13.641 "1f1b4518-21e6-4e60-8ee7-2d5a67399c1f" 00:23:13.641 ], 00:23:13.641 "product_name": "NVMe disk", 00:23:13.641 "block_size": 512, 00:23:13.641 "num_blocks": 2097152, 00:23:13.641 "uuid": "1f1b4518-21e6-4e60-8ee7-2d5a67399c1f", 00:23:13.641 "assigned_rate_limits": { 00:23:13.641 "rw_ios_per_sec": 0, 00:23:13.641 "rw_mbytes_per_sec": 0, 00:23:13.641 "r_mbytes_per_sec": 0, 00:23:13.641 "w_mbytes_per_sec": 0 00:23:13.641 }, 00:23:13.641 "claimed": false, 00:23:13.641 "zoned": false, 00:23:13.641 "supported_io_types": { 00:23:13.641 "read": true, 00:23:13.641 "write": true, 00:23:13.641 "unmap": false, 00:23:13.641 "write_zeroes": true, 00:23:13.641 "flush": true, 00:23:13.641 "reset": true, 00:23:13.641 "compare": true, 00:23:13.641 "compare_and_write": true, 00:23:13.641 "abort": true, 00:23:13.641 "nvme_admin": true, 00:23:13.641 "nvme_io": true 00:23:13.641 }, 00:23:13.641 "memory_domains": [ 00:23:13.641 { 00:23:13.641 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:23:13.641 "dma_device_type": 0 00:23:13.641 } 00:23:13.641 ], 00:23:13.641 "driver_specific": { 00:23:13.641 "nvme": [ 00:23:13.641 { 00:23:13.641 "trid": { 00:23:13.641 "trtype": "RDMA", 00:23:13.641 "adrfam": "IPv4", 00:23:13.641 "traddr": "192.168.100.8", 00:23:13.641 "trsvcid": "4420", 00:23:13.641 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:13.641 }, 00:23:13.641 "ctrlr_data": { 00:23:13.641 "cntlid": 1, 00:23:13.641 "vendor_id": "0x8086", 00:23:13.641 "model_number": "SPDK bdev Controller", 00:23:13.641 "serial_number": "00000000000000000000", 00:23:13.641 "firmware_revision": "24.01.1", 00:23:13.641 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:13.641 "oacs": { 00:23:13.641 "security": 0, 00:23:13.641 "format": 0, 00:23:13.641 "firmware": 0, 00:23:13.641 "ns_manage": 0 00:23:13.641 }, 00:23:13.641 "multi_ctrlr": true, 00:23:13.641 "ana_reporting": false 00:23:13.641 }, 00:23:13.641 "vs": { 00:23:13.641 "nvme_version": "1.3" 00:23:13.641 }, 00:23:13.641 "ns_data": { 00:23:13.641 "id": 1, 00:23:13.641 "can_share": true 00:23:13.641 } 00:23:13.641 } 00:23:13.641 ], 00:23:13.641 "mp_policy": "active_passive" 00:23:13.641 } 00:23:13.641 } 00:23:13.641 ] 00:23:13.642 11:17:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.642 11:17:34 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:13.642 11:17:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.642 11:17:34 -- common/autotest_common.sh@10 -- # set +x 00:23:13.642 [2024-12-13 11:17:34.095691] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:13.642 [2024-12-13 11:17:34.117990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:13.642 [2024-12-13 11:17:34.138092] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
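The reset path just exercised comes down to three RPCs; a sketch reusing the names from the trace (192.168.100.8:4420 listener from the earlier setup, default RPC socket assumed):
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
./scripts/rpc.py bdev_nvme_reset_controller nvme0
# after the disconnect/reconnect the target hands out a fresh controller ID,
# which is why cntlid moves from 1 to 2 in the bdev_get_bdevs dump that follows
./scripts/rpc.py bdev_get_bdevs -b nvme0n1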
00:23:13.642 11:17:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.642 11:17:34 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:13.642 11:17:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.642 11:17:34 -- common/autotest_common.sh@10 -- # set +x 00:23:13.642 [ 00:23:13.642 { 00:23:13.642 "name": "nvme0n1", 00:23:13.642 "aliases": [ 00:23:13.642 "1f1b4518-21e6-4e60-8ee7-2d5a67399c1f" 00:23:13.642 ], 00:23:13.642 "product_name": "NVMe disk", 00:23:13.642 "block_size": 512, 00:23:13.642 "num_blocks": 2097152, 00:23:13.642 "uuid": "1f1b4518-21e6-4e60-8ee7-2d5a67399c1f", 00:23:13.642 "assigned_rate_limits": { 00:23:13.642 "rw_ios_per_sec": 0, 00:23:13.642 "rw_mbytes_per_sec": 0, 00:23:13.642 "r_mbytes_per_sec": 0, 00:23:13.642 "w_mbytes_per_sec": 0 00:23:13.642 }, 00:23:13.642 "claimed": false, 00:23:13.642 "zoned": false, 00:23:13.642 "supported_io_types": { 00:23:13.642 "read": true, 00:23:13.642 "write": true, 00:23:13.642 "unmap": false, 00:23:13.642 "write_zeroes": true, 00:23:13.642 "flush": true, 00:23:13.642 "reset": true, 00:23:13.642 "compare": true, 00:23:13.642 "compare_and_write": true, 00:23:13.642 "abort": true, 00:23:13.642 "nvme_admin": true, 00:23:13.642 "nvme_io": true 00:23:13.642 }, 00:23:13.642 "memory_domains": [ 00:23:13.642 { 00:23:13.642 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:23:13.642 "dma_device_type": 0 00:23:13.642 } 00:23:13.642 ], 00:23:13.642 "driver_specific": { 00:23:13.642 "nvme": [ 00:23:13.642 { 00:23:13.642 "trid": { 00:23:13.642 "trtype": "RDMA", 00:23:13.642 "adrfam": "IPv4", 00:23:13.642 "traddr": "192.168.100.8", 00:23:13.642 "trsvcid": "4420", 00:23:13.642 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:13.642 }, 00:23:13.642 "ctrlr_data": { 00:23:13.642 "cntlid": 2, 00:23:13.642 "vendor_id": "0x8086", 00:23:13.642 "model_number": "SPDK bdev Controller", 00:23:13.642 "serial_number": "00000000000000000000", 00:23:13.642 "firmware_revision": "24.01.1", 00:23:13.642 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:13.642 "oacs": { 00:23:13.642 "security": 0, 00:23:13.642 "format": 0, 00:23:13.642 "firmware": 0, 00:23:13.642 "ns_manage": 0 00:23:13.642 }, 00:23:13.642 "multi_ctrlr": true, 00:23:13.642 "ana_reporting": false 00:23:13.642 }, 00:23:13.642 "vs": { 00:23:13.642 "nvme_version": "1.3" 00:23:13.642 }, 00:23:13.642 "ns_data": { 00:23:13.642 "id": 1, 00:23:13.642 "can_share": true 00:23:13.642 } 00:23:13.642 } 00:23:13.642 ], 00:23:13.642 "mp_policy": "active_passive" 00:23:13.642 } 00:23:13.642 } 00:23:13.642 ] 00:23:13.642 11:17:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.642 11:17:34 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.642 11:17:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.642 11:17:34 -- common/autotest_common.sh@10 -- # set +x 00:23:13.642 11:17:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.642 11:17:34 -- host/async_init.sh@53 -- # mktemp 00:23:13.642 11:17:34 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.XT9mUOQa9J 00:23:13.642 11:17:34 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:13.642 11:17:34 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.XT9mUOQa9J 00:23:13.642 11:17:34 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:13.642 11:17:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.642 11:17:34 -- common/autotest_common.sh@10 -- # set +x 
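The key file prepared above feeds the secure-channel steps that follow (the trace notes TLS support is still experimental); condensed into plain RPC calls, with the key string, path, and NQNs copied from the trace and the default RPC socket assumed:
# TLS pre-shared key written once, mode 0600, then referenced by both sides
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > /tmp/tmp.XT9mUOQa9J
chmod 0600 /tmp/tmp.XT9mUOQa9J
./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XT9mUOQa9J
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XT9mUOQa9J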
00:23:13.642 11:17:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.642 11:17:34 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:23:13.642 11:17:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.642 11:17:34 -- common/autotest_common.sh@10 -- # set +x 00:23:13.642 [2024-12-13 11:17:34.192446] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:23:13.642 11:17:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.642 11:17:34 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XT9mUOQa9J 00:23:13.642 11:17:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.642 11:17:34 -- common/autotest_common.sh@10 -- # set +x 00:23:13.642 11:17:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.642 11:17:34 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XT9mUOQa9J 00:23:13.642 11:17:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.642 11:17:34 -- common/autotest_common.sh@10 -- # set +x 00:23:13.642 [2024-12-13 11:17:34.208472] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:13.900 nvme0n1 00:23:13.900 11:17:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.900 11:17:34 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:13.900 11:17:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.900 11:17:34 -- common/autotest_common.sh@10 -- # set +x 00:23:13.900 [ 00:23:13.900 { 00:23:13.900 "name": "nvme0n1", 00:23:13.900 "aliases": [ 00:23:13.900 "1f1b4518-21e6-4e60-8ee7-2d5a67399c1f" 00:23:13.900 ], 00:23:13.900 "product_name": "NVMe disk", 00:23:13.900 "block_size": 512, 00:23:13.900 "num_blocks": 2097152, 00:23:13.900 "uuid": "1f1b4518-21e6-4e60-8ee7-2d5a67399c1f", 00:23:13.900 "assigned_rate_limits": { 00:23:13.900 "rw_ios_per_sec": 0, 00:23:13.900 "rw_mbytes_per_sec": 0, 00:23:13.900 "r_mbytes_per_sec": 0, 00:23:13.900 "w_mbytes_per_sec": 0 00:23:13.900 }, 00:23:13.900 "claimed": false, 00:23:13.900 "zoned": false, 00:23:13.900 "supported_io_types": { 00:23:13.900 "read": true, 00:23:13.900 "write": true, 00:23:13.900 "unmap": false, 00:23:13.900 "write_zeroes": true, 00:23:13.900 "flush": true, 00:23:13.900 "reset": true, 00:23:13.900 "compare": true, 00:23:13.900 "compare_and_write": true, 00:23:13.900 "abort": true, 00:23:13.900 "nvme_admin": true, 00:23:13.900 "nvme_io": true 00:23:13.900 }, 00:23:13.900 "memory_domains": [ 00:23:13.900 { 00:23:13.900 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:23:13.900 "dma_device_type": 0 00:23:13.900 } 00:23:13.900 ], 00:23:13.900 "driver_specific": { 00:23:13.900 "nvme": [ 00:23:13.900 { 00:23:13.900 "trid": { 00:23:13.900 "trtype": "RDMA", 00:23:13.900 "adrfam": "IPv4", 00:23:13.900 "traddr": "192.168.100.8", 00:23:13.900 "trsvcid": "4421", 00:23:13.900 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:13.900 }, 00:23:13.900 "ctrlr_data": { 00:23:13.900 "cntlid": 3, 00:23:13.900 "vendor_id": "0x8086", 00:23:13.900 "model_number": "SPDK bdev Controller", 00:23:13.900 "serial_number": "00000000000000000000", 00:23:13.900 "firmware_revision": "24.01.1", 00:23:13.900 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:13.900 
"oacs": { 00:23:13.900 "security": 0, 00:23:13.900 "format": 0, 00:23:13.900 "firmware": 0, 00:23:13.900 "ns_manage": 0 00:23:13.900 }, 00:23:13.900 "multi_ctrlr": true, 00:23:13.900 "ana_reporting": false 00:23:13.900 }, 00:23:13.900 "vs": { 00:23:13.900 "nvme_version": "1.3" 00:23:13.900 }, 00:23:13.900 "ns_data": { 00:23:13.900 "id": 1, 00:23:13.900 "can_share": true 00:23:13.900 } 00:23:13.900 } 00:23:13.900 ], 00:23:13.900 "mp_policy": "active_passive" 00:23:13.900 } 00:23:13.900 } 00:23:13.900 ] 00:23:13.900 11:17:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.900 11:17:34 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.900 11:17:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.900 11:17:34 -- common/autotest_common.sh@10 -- # set +x 00:23:13.900 11:17:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.900 11:17:34 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.XT9mUOQa9J 00:23:13.900 11:17:34 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:13.900 11:17:34 -- host/async_init.sh@78 -- # nvmftestfini 00:23:13.900 11:17:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:13.900 11:17:34 -- nvmf/common.sh@116 -- # sync 00:23:13.900 11:17:34 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:23:13.900 11:17:34 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:23:13.900 11:17:34 -- nvmf/common.sh@119 -- # set +e 00:23:13.900 11:17:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:13.900 11:17:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:23:13.900 rmmod nvme_rdma 00:23:13.900 rmmod nvme_fabrics 00:23:13.900 11:17:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:13.900 11:17:34 -- nvmf/common.sh@123 -- # set -e 00:23:13.900 11:17:34 -- nvmf/common.sh@124 -- # return 0 00:23:13.900 11:17:34 -- nvmf/common.sh@477 -- # '[' -n 1710833 ']' 00:23:13.900 11:17:34 -- nvmf/common.sh@478 -- # killprocess 1710833 00:23:13.900 11:17:34 -- common/autotest_common.sh@936 -- # '[' -z 1710833 ']' 00:23:13.900 11:17:34 -- common/autotest_common.sh@940 -- # kill -0 1710833 00:23:13.900 11:17:34 -- common/autotest_common.sh@941 -- # uname 00:23:13.900 11:17:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:13.900 11:17:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1710833 00:23:13.900 11:17:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:13.900 11:17:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:13.900 11:17:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1710833' 00:23:13.900 killing process with pid 1710833 00:23:13.900 11:17:34 -- common/autotest_common.sh@955 -- # kill 1710833 00:23:13.900 11:17:34 -- common/autotest_common.sh@960 -- # wait 1710833 00:23:14.158 11:17:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:14.158 11:17:34 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:23:14.158 00:23:14.158 real 0m6.973s 00:23:14.158 user 0m3.058s 00:23:14.158 sys 0m4.212s 00:23:14.158 11:17:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:14.158 11:17:34 -- common/autotest_common.sh@10 -- # set +x 00:23:14.158 ************************************ 00:23:14.158 END TEST nvmf_async_init 00:23:14.158 ************************************ 00:23:14.158 11:17:34 -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:23:14.158 11:17:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:14.158 
11:17:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:14.158 11:17:34 -- common/autotest_common.sh@10 -- # set +x 00:23:14.158 ************************************ 00:23:14.158 START TEST dma 00:23:14.158 ************************************ 00:23:14.158 11:17:34 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:23:14.414 * Looking for test storage... 00:23:14.414 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:14.414 11:17:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:14.414 11:17:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:14.414 11:17:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:14.414 11:17:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:14.414 11:17:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:14.414 11:17:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:14.414 11:17:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:14.414 11:17:34 -- scripts/common.sh@335 -- # IFS=.-: 00:23:14.414 11:17:34 -- scripts/common.sh@335 -- # read -ra ver1 00:23:14.414 11:17:34 -- scripts/common.sh@336 -- # IFS=.-: 00:23:14.414 11:17:34 -- scripts/common.sh@336 -- # read -ra ver2 00:23:14.414 11:17:34 -- scripts/common.sh@337 -- # local 'op=<' 00:23:14.414 11:17:34 -- scripts/common.sh@339 -- # ver1_l=2 00:23:14.414 11:17:34 -- scripts/common.sh@340 -- # ver2_l=1 00:23:14.414 11:17:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:14.414 11:17:34 -- scripts/common.sh@343 -- # case "$op" in 00:23:14.414 11:17:34 -- scripts/common.sh@344 -- # : 1 00:23:14.414 11:17:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:14.414 11:17:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:14.414 11:17:34 -- scripts/common.sh@364 -- # decimal 1 00:23:14.414 11:17:34 -- scripts/common.sh@352 -- # local d=1 00:23:14.414 11:17:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:14.414 11:17:34 -- scripts/common.sh@354 -- # echo 1 00:23:14.414 11:17:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:14.414 11:17:34 -- scripts/common.sh@365 -- # decimal 2 00:23:14.414 11:17:34 -- scripts/common.sh@352 -- # local d=2 00:23:14.414 11:17:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:14.414 11:17:34 -- scripts/common.sh@354 -- # echo 2 00:23:14.415 11:17:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:14.415 11:17:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:14.415 11:17:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:14.415 11:17:34 -- scripts/common.sh@367 -- # return 0 00:23:14.415 11:17:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:14.415 11:17:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:14.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.415 --rc genhtml_branch_coverage=1 00:23:14.415 --rc genhtml_function_coverage=1 00:23:14.415 --rc genhtml_legend=1 00:23:14.415 --rc geninfo_all_blocks=1 00:23:14.415 --rc geninfo_unexecuted_blocks=1 00:23:14.415 00:23:14.415 ' 00:23:14.415 11:17:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:14.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.415 --rc genhtml_branch_coverage=1 00:23:14.415 --rc genhtml_function_coverage=1 00:23:14.415 --rc genhtml_legend=1 00:23:14.415 --rc geninfo_all_blocks=1 00:23:14.415 --rc geninfo_unexecuted_blocks=1 00:23:14.415 00:23:14.415 ' 00:23:14.415 11:17:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:14.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.415 --rc genhtml_branch_coverage=1 00:23:14.415 --rc genhtml_function_coverage=1 00:23:14.415 --rc genhtml_legend=1 00:23:14.415 --rc geninfo_all_blocks=1 00:23:14.415 --rc geninfo_unexecuted_blocks=1 00:23:14.415 00:23:14.415 ' 00:23:14.415 11:17:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:14.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.415 --rc genhtml_branch_coverage=1 00:23:14.415 --rc genhtml_function_coverage=1 00:23:14.415 --rc genhtml_legend=1 00:23:14.415 --rc geninfo_all_blocks=1 00:23:14.415 --rc geninfo_unexecuted_blocks=1 00:23:14.415 00:23:14.415 ' 00:23:14.415 11:17:34 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:14.415 11:17:34 -- nvmf/common.sh@7 -- # uname -s 00:23:14.415 11:17:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:14.415 11:17:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:14.415 11:17:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:14.415 11:17:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:14.415 11:17:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:14.415 11:17:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:14.415 11:17:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:14.415 11:17:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:14.415 11:17:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:14.415 11:17:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:14.415 11:17:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 
00:23:14.415 11:17:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:23:14.415 11:17:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:14.415 11:17:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:14.415 11:17:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:14.415 11:17:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:14.415 11:17:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:14.415 11:17:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:14.415 11:17:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:14.415 11:17:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.415 11:17:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.415 11:17:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.415 11:17:34 -- paths/export.sh@5 -- # export PATH 00:23:14.415 11:17:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.415 11:17:34 -- nvmf/common.sh@46 -- # : 0 00:23:14.415 11:17:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:14.415 11:17:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:14.415 11:17:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:14.415 11:17:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:14.415 11:17:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:14.415 11:17:34 -- 
nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:14.415 11:17:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:14.415 11:17:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:14.415 11:17:34 -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:23:14.415 11:17:34 -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:23:14.415 11:17:34 -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:23:14.415 11:17:34 -- host/dma.sh@18 -- # subsystem=0 00:23:14.415 11:17:34 -- host/dma.sh@93 -- # nvmftestinit 00:23:14.415 11:17:34 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:23:14.415 11:17:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:14.415 11:17:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:14.415 11:17:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:14.415 11:17:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:14.415 11:17:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.415 11:17:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:14.415 11:17:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.415 11:17:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:14.415 11:17:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:14.415 11:17:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:14.415 11:17:34 -- common/autotest_common.sh@10 -- # set +x 00:23:19.671 11:17:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:19.671 11:17:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:19.671 11:17:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:19.671 11:17:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:19.671 11:17:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:19.671 11:17:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:19.671 11:17:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:19.671 11:17:40 -- nvmf/common.sh@294 -- # net_devs=() 00:23:19.671 11:17:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:19.671 11:17:40 -- nvmf/common.sh@295 -- # e810=() 00:23:19.671 11:17:40 -- nvmf/common.sh@295 -- # local -ga e810 00:23:19.671 11:17:40 -- nvmf/common.sh@296 -- # x722=() 00:23:19.671 11:17:40 -- nvmf/common.sh@296 -- # local -ga x722 00:23:19.671 11:17:40 -- nvmf/common.sh@297 -- # mlx=() 00:23:19.671 11:17:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:19.671 11:17:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:19.671 11:17:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:19.671 11:17:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:19.671 11:17:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:19.671 11:17:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:19.671 11:17:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:19.671 11:17:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:19.671 11:17:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:19.671 11:17:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:19.671 11:17:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:19.671 11:17:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:19.671 11:17:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:19.671 11:17:40 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:23:19.671 11:17:40 -- nvmf/common.sh@321 -- # 
pci_devs+=("${x722[@]}") 00:23:19.671 11:17:40 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:23:19.671 11:17:40 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:23:19.671 11:17:40 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:23:19.671 11:17:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:19.671 11:17:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:19.671 11:17:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:23:19.671 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:23:19.671 11:17:40 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:19.671 11:17:40 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:19.671 11:17:40 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:19.671 11:17:40 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:19.671 11:17:40 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:19.671 11:17:40 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:19.671 11:17:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:19.671 11:17:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:23:19.671 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:23:19.671 11:17:40 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:19.671 11:17:40 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:19.671 11:17:40 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:19.671 11:17:40 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:19.671 11:17:40 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:19.671 11:17:40 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:19.671 11:17:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:19.671 11:17:40 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:23:19.671 11:17:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:19.671 11:17:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:19.671 11:17:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:19.671 11:17:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:19.671 11:17:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:23:19.671 Found net devices under 0000:18:00.0: mlx_0_0 00:23:19.671 11:17:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:19.671 11:17:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:19.671 11:17:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:19.671 11:17:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:19.671 11:17:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:19.671 11:17:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:23:19.671 Found net devices under 0000:18:00.1: mlx_0_1 00:23:19.671 11:17:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:19.671 11:17:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:19.671 11:17:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:19.671 11:17:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:19.671 11:17:40 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:23:19.672 11:17:40 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:23:19.672 11:17:40 -- nvmf/common.sh@408 -- # rdma_device_init 00:23:19.672 11:17:40 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:23:19.672 11:17:40 -- nvmf/common.sh@57 -- # uname 00:23:19.672 11:17:40 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:23:19.672 11:17:40 -- nvmf/common.sh@61 
-- # modprobe ib_cm 00:23:19.672 11:17:40 -- nvmf/common.sh@62 -- # modprobe ib_core 00:23:19.672 11:17:40 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:23:19.672 11:17:40 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:23:19.672 11:17:40 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:23:19.672 11:17:40 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:23:19.672 11:17:40 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:23:19.929 11:17:40 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:23:19.929 11:17:40 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:19.929 11:17:40 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:23:19.929 11:17:40 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:19.929 11:17:40 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:19.929 11:17:40 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:19.929 11:17:40 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:19.929 11:17:40 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:19.929 11:17:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:19.929 11:17:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:19.929 11:17:40 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:19.929 11:17:40 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:19.929 11:17:40 -- nvmf/common.sh@104 -- # continue 2 00:23:19.929 11:17:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:19.929 11:17:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:19.929 11:17:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:19.929 11:17:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:19.929 11:17:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:19.929 11:17:40 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:19.929 11:17:40 -- nvmf/common.sh@104 -- # continue 2 00:23:19.929 11:17:40 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:19.929 11:17:40 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:23:19.929 11:17:40 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:19.929 11:17:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:19.929 11:17:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:19.929 11:17:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:19.929 11:17:40 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:23:19.929 11:17:40 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:23:19.929 11:17:40 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:23:19.929 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:19.929 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:23:19.929 altname enp24s0f0np0 00:23:19.929 altname ens785f0np0 00:23:19.929 inet 192.168.100.8/24 scope global mlx_0_0 00:23:19.929 valid_lft forever preferred_lft forever 00:23:19.929 11:17:40 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:19.929 11:17:40 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:23:19.929 11:17:40 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:19.929 11:17:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:19.929 11:17:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:19.929 11:17:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:19.930 11:17:40 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:23:19.930 11:17:40 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:23:19.930 11:17:40 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:23:19.930 3: mlx_0_1: 
mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:19.930 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:23:19.930 altname enp24s0f1np1 00:23:19.930 altname ens785f1np1 00:23:19.930 inet 192.168.100.9/24 scope global mlx_0_1 00:23:19.930 valid_lft forever preferred_lft forever 00:23:19.930 11:17:40 -- nvmf/common.sh@410 -- # return 0 00:23:19.930 11:17:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:19.930 11:17:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:19.930 11:17:40 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:23:19.930 11:17:40 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:23:19.930 11:17:40 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:23:19.930 11:17:40 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:19.930 11:17:40 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:19.930 11:17:40 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:19.930 11:17:40 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:19.930 11:17:40 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:19.930 11:17:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:19.930 11:17:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:19.930 11:17:40 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:19.930 11:17:40 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:19.930 11:17:40 -- nvmf/common.sh@104 -- # continue 2 00:23:19.930 11:17:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:19.930 11:17:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:19.930 11:17:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:19.930 11:17:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:19.930 11:17:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:19.930 11:17:40 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:19.930 11:17:40 -- nvmf/common.sh@104 -- # continue 2 00:23:19.930 11:17:40 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:19.930 11:17:40 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:23:19.930 11:17:40 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:19.930 11:17:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:19.930 11:17:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:19.930 11:17:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:19.930 11:17:40 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:19.930 11:17:40 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:23:19.930 11:17:40 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:19.930 11:17:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:19.930 11:17:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:19.930 11:17:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:19.930 11:17:40 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:23:19.930 192.168.100.9' 00:23:19.930 11:17:40 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:23:19.930 192.168.100.9' 00:23:19.930 11:17:40 -- nvmf/common.sh@445 -- # head -n 1 00:23:19.930 11:17:40 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:19.930 11:17:40 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:19.930 192.168.100.9' 00:23:19.930 11:17:40 -- nvmf/common.sh@446 -- # tail -n +2 00:23:19.930 11:17:40 -- nvmf/common.sh@446 -- # head -n 1 00:23:19.930 11:17:40 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:19.930 11:17:40 -- 
nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:23:19.930 11:17:40 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:19.930 11:17:40 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:23:19.930 11:17:40 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:23:19.930 11:17:40 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:23:19.930 11:17:40 -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:23:19.930 11:17:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:19.930 11:17:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:19.930 11:17:40 -- common/autotest_common.sh@10 -- # set +x 00:23:19.930 11:17:40 -- nvmf/common.sh@469 -- # nvmfpid=1714380 00:23:19.930 11:17:40 -- nvmf/common.sh@470 -- # waitforlisten 1714380 00:23:19.930 11:17:40 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:19.930 11:17:40 -- common/autotest_common.sh@829 -- # '[' -z 1714380 ']' 00:23:19.930 11:17:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.930 11:17:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:19.930 11:17:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:19.930 11:17:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:19.930 11:17:40 -- common/autotest_common.sh@10 -- # set +x 00:23:19.930 [2024-12-13 11:17:40.442191] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:19.930 [2024-12-13 11:17:40.442231] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:19.930 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.930 [2024-12-13 11:17:40.493732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:20.187 [2024-12-13 11:17:40.564459] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:20.187 [2024-12-13 11:17:40.564562] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:20.187 [2024-12-13 11:17:40.564570] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:20.187 [2024-12-13 11:17:40.564576] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
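For orientation: the "[ DPDK EAL parameters ... ]" and tracepoint notices above are the SPDK NVMe-oF target coming up under nvmfappstart -m 0x3. nvmfappstart and waitforlisten are helpers from the test tree, not public APIs; a rough sketch of what the call amounts to, with the real bookkeeping living in test/nvmf/common.sh:

    # Illustrative sketch of nvmfappstart -m 0x3 (see test/nvmf/common.sh for the real helper)
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &   # two reactors (cores 0-1), all tracepoint groups
    nvmfpid=$!                                    # 1714380 in this run
    waitforlisten "$nvmfpid"                      # poll until /var/tmp/spdk.sock accepts RPCs
    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT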
00:23:20.187 [2024-12-13 11:17:40.567284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.187 [2024-12-13 11:17:40.567287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.751 11:17:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:20.751 11:17:41 -- common/autotest_common.sh@862 -- # return 0 00:23:20.751 11:17:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:20.751 11:17:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:20.751 11:17:41 -- common/autotest_common.sh@10 -- # set +x 00:23:20.751 11:17:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:20.751 11:17:41 -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:23:20.751 11:17:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.751 11:17:41 -- common/autotest_common.sh@10 -- # set +x 00:23:20.751 [2024-12-13 11:17:41.283694] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f1b320/0x1f1f810) succeed. 00:23:20.751 [2024-12-13 11:17:41.291561] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f1c820/0x1f60eb0) succeed. 00:23:21.009 11:17:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.009 11:17:41 -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:23:21.009 11:17:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.009 11:17:41 -- common/autotest_common.sh@10 -- # set +x 00:23:21.009 Malloc0 00:23:21.009 11:17:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.009 11:17:41 -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:23:21.009 11:17:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.009 11:17:41 -- common/autotest_common.sh@10 -- # set +x 00:23:21.009 11:17:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.009 11:17:41 -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:23:21.009 11:17:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.009 11:17:41 -- common/autotest_common.sh@10 -- # set +x 00:23:21.009 11:17:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.009 11:17:41 -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:23:21.009 11:17:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.009 11:17:41 -- common/autotest_common.sh@10 -- # set +x 00:23:21.009 [2024-12-13 11:17:41.442590] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:21.009 11:17:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.009 11:17:41 -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate -r /var/tmp/dma.sock 00:23:21.009 11:17:41 -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:23:21.009 11:17:41 -- nvmf/common.sh@520 -- # config=() 00:23:21.009 11:17:41 -- nvmf/common.sh@520 -- # local subsystem config 00:23:21.009 11:17:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:21.009 11:17:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:21.009 { 00:23:21.009 "params": { 00:23:21.009 "name": "Nvme$subsystem", 00:23:21.009 "trtype": "$TEST_TRANSPORT", 00:23:21.009 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.009 "adrfam": 
"ipv4", 00:23:21.009 "trsvcid": "$NVMF_PORT", 00:23:21.009 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.009 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.009 "hdgst": ${hdgst:-false}, 00:23:21.009 "ddgst": ${ddgst:-false} 00:23:21.009 }, 00:23:21.009 "method": "bdev_nvme_attach_controller" 00:23:21.009 } 00:23:21.009 EOF 00:23:21.009 )") 00:23:21.009 11:17:41 -- nvmf/common.sh@542 -- # cat 00:23:21.009 11:17:41 -- nvmf/common.sh@544 -- # jq . 00:23:21.009 11:17:41 -- nvmf/common.sh@545 -- # IFS=, 00:23:21.009 11:17:41 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:21.009 "params": { 00:23:21.009 "name": "Nvme0", 00:23:21.009 "trtype": "rdma", 00:23:21.009 "traddr": "192.168.100.8", 00:23:21.009 "adrfam": "ipv4", 00:23:21.009 "trsvcid": "4420", 00:23:21.009 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:21.009 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:21.009 "hdgst": false, 00:23:21.009 "ddgst": false 00:23:21.009 }, 00:23:21.009 "method": "bdev_nvme_attach_controller" 00:23:21.009 }' 00:23:21.009 [2024-12-13 11:17:41.489895] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:21.009 [2024-12-13 11:17:41.489937] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1714662 ] 00:23:21.009 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.009 [2024-12-13 11:17:41.537202] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:21.266 [2024-12-13 11:17:41.604304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:21.266 [2024-12-13 11:17:41.604308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:26.524 bdev Nvme0n1 reports 1 memory domains 00:23:26.524 bdev Nvme0n1 supports RDMA memory domain 00:23:26.524 Initialization complete, running randrw IO for 5 sec on 2 cores 00:23:26.524 ========================================================================== 00:23:26.524 Latency [us] 00:23:26.524 IOPS MiB/s Average min max 00:23:26.524 Core 2: 23165.55 90.49 690.01 219.23 8408.68 00:23:26.524 Core 3: 23263.73 90.87 687.10 224.83 8477.40 00:23:26.524 ========================================================================== 00:23:26.524 Total : 46429.29 181.36 688.55 219.23 8477.40 00:23:26.524 00:23:26.524 Total operations: 232191, translate 232191 pull_push 0 memzero 0 00:23:26.524 11:17:46 -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push -r /var/tmp/dma.sock 00:23:26.524 11:17:46 -- host/dma.sh@107 -- # gen_malloc_json 00:23:26.524 11:17:46 -- host/dma.sh@21 -- # jq . 00:23:26.524 [2024-12-13 11:17:47.035496] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:23:26.524 [2024-12-13 11:17:47.035546] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1715660 ] 00:23:26.524 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.524 [2024-12-13 11:17:47.082381] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:26.783 [2024-12-13 11:17:47.149122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:26.783 [2024-12-13 11:17:47.149126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:32.039 bdev Malloc0 reports 1 memory domains 00:23:32.039 bdev Malloc0 doesn't support RDMA memory domain 00:23:32.039 Initialization complete, running randrw IO for 5 sec on 2 cores 00:23:32.039 ========================================================================== 00:23:32.039 Latency [us] 00:23:32.039 IOPS MiB/s Average min max 00:23:32.039 Core 2: 15578.31 60.85 1026.38 389.67 2165.10 00:23:32.039 Core 3: 15828.03 61.83 1010.17 406.81 1619.36 00:23:32.039 ========================================================================== 00:23:32.039 Total : 31406.34 122.68 1018.21 389.67 2165.10 00:23:32.039 00:23:32.039 Total operations: 157085, translate 0 pull_push 628340 memzero 0 00:23:32.039 11:17:52 -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero -r /var/tmp/dma.sock 00:23:32.039 11:17:52 -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:23:32.039 11:17:52 -- host/dma.sh@48 -- # local subsystem=0 00:23:32.039 11:17:52 -- host/dma.sh@50 -- # jq . 00:23:32.039 Ignoring -M option 00:23:32.039 [2024-12-13 11:17:52.507261] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
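To make the two result blocks above easier to read back-to-back: both runs use the same 4 KiB, QD-16, 70/30 randrw workload on cores 2-3; only the bdev differs. The first run provisions the target over the RPC socket and drives Nvme0n1, which exposes an RDMA memory domain, so every I/O completes on the translate path; the second run drives a local malloc bdev, which (as the log states) has no RDMA memory domain, so everything falls back to pull_push bounce-buffer copies. Condensed from the rpc_cmd and test_dma lines in the trace, with rpc_cmd taken to be the test wrapper around scripts/rpc.py, paths shortened, and /dev/fd/62 standing in for the generated JSON:

    # Target side, once (via scripts/rpc.py against /var/tmp/spdk.sock)
    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    rpc.py bdev_malloc_create 256 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420

    # Run 1: RDMA-backed bdev -> translate path
    test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc \
        --json <(gen_nvmf_target_json 0) -b Nvme0n1 -f -x translate -r /var/tmp/dma.sock
    #   -> Total operations: 232191, translate 232191 pull_push 0 memzero 0

    # Run 2: plain malloc bdev -> pull_push fallback
    test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc \
        --json <(gen_malloc_json) -b Malloc0 -x pull_push -r /var/tmp/dma.sock
    #   -> Total operations: 157085, translate 0 pull_push 628340 memzero 0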
00:23:32.039 [2024-12-13 11:17:52.507316] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1716535 ] 00:23:32.039 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.039 [2024-12-13 11:17:52.555215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:32.297 [2024-12-13 11:17:52.617708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:32.297 [2024-12-13 11:17:52.617711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:32.297 [2024-12-13 11:17:52.815329] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:23:37.646 [2024-12-13 11:17:57.843310] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:23:37.646 bdev 3ece7545-61a5-455a-8303-efe423e569df reports 1 memory domains 00:23:37.646 bdev 3ece7545-61a5-455a-8303-efe423e569df supports RDMA memory domain 00:23:37.646 Initialization complete, running randread IO for 5 sec on 2 cores 00:23:37.646 ========================================================================== 00:23:37.646 Latency [us] 00:23:37.646 IOPS MiB/s Average min max 00:23:37.646 Core 2: 75557.32 295.15 210.94 85.31 2824.86 00:23:37.646 Core 3: 76340.62 298.21 208.75 79.08 2723.73 00:23:37.646 ========================================================================== 00:23:37.646 Total : 151897.94 593.35 209.84 79.08 2824.86 00:23:37.646 00:23:37.646 Total operations: 759581, translate 0 pull_push 0 memzero 759581 00:23:37.646 11:17:58 -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:23:37.646 EAL: No free 2048 kB hugepages reported on node 1 00:23:37.646 [2024-12-13 11:17:58.155554] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:40.169 Initializing NVMe Controllers 00:23:40.169 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:23:40.170 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:23:40.170 Initialization complete. Launching workers. 00:23:40.170 ======================================================== 00:23:40.170 Latency(us) 00:23:40.170 Device Information : IOPS MiB/s Average min max 00:23:40.170 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2016.00 7.88 7972.61 5742.24 9222.84 00:23:40.170 ======================================================== 00:23:40.170 Total : 2016.00 7.88 7972.61 5742.24 9222.84 00:23:40.170 00:23:40.170 11:18:00 -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate -r /var/tmp/dma.sock 00:23:40.170 11:18:00 -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:23:40.170 11:18:00 -- host/dma.sh@48 -- # local subsystem=0 00:23:40.170 11:18:00 -- host/dma.sh@50 -- # jq . 
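Two more invocations sit in the block above before the final run starts: a memzero pass against an lvol stacked on the exported namespace (the rpc_bdev_lvol_create deprecation warning comes from building that lvol, and "Ignoring -M option" is expected for a pure-read workload), and a one-second spdk_nvme_perf write pass against the same RDMA listener, which also triggers the expected discovery-listener warning. As run here, with paths shortened:

    # memzero pass on lvs0/lvol0 (lvol carved on top of the attached NVMe-oF controller)
    test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc \
        --json <(gen_lvol_nvme_json 0) -b lvs0/lvol0 -f -x memzero -r /var/tmp/dma.sock
    #   -> Total operations: 759581, translate 0 pull_push 0 memzero 759581

    # short qualification pass with the standalone perf tool
    build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 \
        -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420'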
00:23:40.170 [2024-12-13 11:18:00.490755] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:40.170 [2024-12-13 11:18:00.490799] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1718094 ] 00:23:40.170 EAL: No free 2048 kB hugepages reported on node 1 00:23:40.170 [2024-12-13 11:18:00.538112] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:40.170 [2024-12-13 11:18:00.603997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:40.170 [2024-12-13 11:18:00.604000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:40.427 [2024-12-13 11:18:00.804714] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:23:45.685 [2024-12-13 11:18:05.833332] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:23:45.685 bdev 4b3ab59d-1bab-42a6-b3a0-9d122ba6f094 reports 1 memory domains 00:23:45.685 bdev 4b3ab59d-1bab-42a6-b3a0-9d122ba6f094 supports RDMA memory domain 00:23:45.685 Initialization complete, running randrw IO for 5 sec on 2 cores 00:23:45.685 ========================================================================== 00:23:45.685 Latency [us] 00:23:45.685 IOPS MiB/s Average min max 00:23:45.685 Core 2: 20226.40 79.01 790.43 39.64 12210.60 00:23:45.685 Core 3: 20690.09 80.82 772.68 11.09 12305.09 00:23:45.685 ========================================================================== 00:23:45.685 Total : 40916.49 159.83 781.46 11.09 12305.09 00:23:45.685 00:23:45.685 Total operations: 204633, translate 204524 pull_push 0 memzero 109 00:23:45.685 11:18:06 -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:23:45.685 11:18:06 -- host/dma.sh@120 -- # nvmftestfini 00:23:45.685 11:18:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:45.685 11:18:06 -- nvmf/common.sh@116 -- # sync 00:23:45.685 11:18:06 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:23:45.685 11:18:06 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:23:45.685 11:18:06 -- nvmf/common.sh@119 -- # set +e 00:23:45.685 11:18:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:45.685 11:18:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:23:45.685 rmmod nvme_rdma 00:23:45.685 rmmod nvme_fabrics 00:23:45.685 11:18:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:45.685 11:18:06 -- nvmf/common.sh@123 -- # set -e 00:23:45.685 11:18:06 -- nvmf/common.sh@124 -- # return 0 00:23:45.685 11:18:06 -- nvmf/common.sh@477 -- # '[' -n 1714380 ']' 00:23:45.685 11:18:06 -- nvmf/common.sh@478 -- # killprocess 1714380 00:23:45.685 11:18:06 -- common/autotest_common.sh@936 -- # '[' -z 1714380 ']' 00:23:45.685 11:18:06 -- common/autotest_common.sh@940 -- # kill -0 1714380 00:23:45.685 11:18:06 -- common/autotest_common.sh@941 -- # uname 00:23:45.685 11:18:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:45.685 11:18:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1714380 00:23:45.685 11:18:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:45.685 11:18:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:45.685 11:18:06 -- common/autotest_common.sh@954 -- # echo 'killing process with 
pid 1714380' 00:23:45.685 killing process with pid 1714380 00:23:45.685 11:18:06 -- common/autotest_common.sh@955 -- # kill 1714380 00:23:45.685 11:18:06 -- common/autotest_common.sh@960 -- # wait 1714380 00:23:45.943 11:18:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:45.943 11:18:06 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:23:45.943 00:23:45.943 real 0m31.819s 00:23:45.943 user 1m36.098s 00:23:45.943 sys 0m5.283s 00:23:45.943 11:18:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:45.943 11:18:06 -- common/autotest_common.sh@10 -- # set +x 00:23:45.943 ************************************ 00:23:45.943 END TEST dma 00:23:45.943 ************************************ 00:23:46.201 11:18:06 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:23:46.201 11:18:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:46.201 11:18:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:46.201 11:18:06 -- common/autotest_common.sh@10 -- # set +x 00:23:46.201 ************************************ 00:23:46.201 START TEST nvmf_identify 00:23:46.201 ************************************ 00:23:46.201 11:18:06 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:23:46.201 * Looking for test storage... 00:23:46.201 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:46.201 11:18:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:46.201 11:18:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:46.201 11:18:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:46.201 11:18:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:46.201 11:18:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:46.201 11:18:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:46.201 11:18:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:46.201 11:18:06 -- scripts/common.sh@335 -- # IFS=.-: 00:23:46.201 11:18:06 -- scripts/common.sh@335 -- # read -ra ver1 00:23:46.201 11:18:06 -- scripts/common.sh@336 -- # IFS=.-: 00:23:46.201 11:18:06 -- scripts/common.sh@336 -- # read -ra ver2 00:23:46.201 11:18:06 -- scripts/common.sh@337 -- # local 'op=<' 00:23:46.201 11:18:06 -- scripts/common.sh@339 -- # ver1_l=2 00:23:46.201 11:18:06 -- scripts/common.sh@340 -- # ver2_l=1 00:23:46.201 11:18:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:46.201 11:18:06 -- scripts/common.sh@343 -- # case "$op" in 00:23:46.201 11:18:06 -- scripts/common.sh@344 -- # : 1 00:23:46.201 11:18:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:46.201 11:18:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:46.201 11:18:06 -- scripts/common.sh@364 -- # decimal 1 00:23:46.201 11:18:06 -- scripts/common.sh@352 -- # local d=1 00:23:46.201 11:18:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:46.201 11:18:06 -- scripts/common.sh@354 -- # echo 1 00:23:46.201 11:18:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:46.201 11:18:06 -- scripts/common.sh@365 -- # decimal 2 00:23:46.201 11:18:06 -- scripts/common.sh@352 -- # local d=2 00:23:46.201 11:18:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:46.201 11:18:06 -- scripts/common.sh@354 -- # echo 2 00:23:46.201 11:18:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:46.201 11:18:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:46.201 11:18:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:46.201 11:18:06 -- scripts/common.sh@367 -- # return 0 00:23:46.201 11:18:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:46.201 11:18:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:46.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.201 --rc genhtml_branch_coverage=1 00:23:46.201 --rc genhtml_function_coverage=1 00:23:46.201 --rc genhtml_legend=1 00:23:46.201 --rc geninfo_all_blocks=1 00:23:46.201 --rc geninfo_unexecuted_blocks=1 00:23:46.201 00:23:46.201 ' 00:23:46.201 11:18:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:46.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.201 --rc genhtml_branch_coverage=1 00:23:46.201 --rc genhtml_function_coverage=1 00:23:46.201 --rc genhtml_legend=1 00:23:46.201 --rc geninfo_all_blocks=1 00:23:46.201 --rc geninfo_unexecuted_blocks=1 00:23:46.201 00:23:46.201 ' 00:23:46.201 11:18:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:46.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.201 --rc genhtml_branch_coverage=1 00:23:46.201 --rc genhtml_function_coverage=1 00:23:46.201 --rc genhtml_legend=1 00:23:46.201 --rc geninfo_all_blocks=1 00:23:46.201 --rc geninfo_unexecuted_blocks=1 00:23:46.201 00:23:46.201 ' 00:23:46.202 11:18:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:46.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.202 --rc genhtml_branch_coverage=1 00:23:46.202 --rc genhtml_function_coverage=1 00:23:46.202 --rc genhtml_legend=1 00:23:46.202 --rc geninfo_all_blocks=1 00:23:46.202 --rc geninfo_unexecuted_blocks=1 00:23:46.202 00:23:46.202 ' 00:23:46.202 11:18:06 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:46.202 11:18:06 -- nvmf/common.sh@7 -- # uname -s 00:23:46.202 11:18:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:46.202 11:18:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:46.202 11:18:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:46.202 11:18:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:46.202 11:18:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:46.202 11:18:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:46.202 11:18:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:46.202 11:18:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:46.202 11:18:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:46.202 11:18:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:46.202 11:18:06 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:23:46.202 11:18:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:23:46.202 11:18:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:46.202 11:18:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:46.202 11:18:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:46.202 11:18:06 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:46.202 11:18:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:46.202 11:18:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:46.202 11:18:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:46.202 11:18:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.202 11:18:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.202 11:18:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.202 11:18:06 -- paths/export.sh@5 -- # export PATH 00:23:46.202 11:18:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.202 11:18:06 -- nvmf/common.sh@46 -- # : 0 00:23:46.202 11:18:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:46.202 11:18:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:46.202 11:18:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:46.202 11:18:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:46.202 11:18:06 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:46.202 11:18:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:46.202 11:18:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:46.202 11:18:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:46.202 11:18:06 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:46.202 11:18:06 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:46.202 11:18:06 -- host/identify.sh@14 -- # nvmftestinit 00:23:46.202 11:18:06 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:23:46.202 11:18:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:46.202 11:18:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:46.202 11:18:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:46.202 11:18:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:46.202 11:18:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.202 11:18:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:46.202 11:18:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.202 11:18:06 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:46.202 11:18:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:46.202 11:18:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:46.202 11:18:06 -- common/autotest_common.sh@10 -- # set +x 00:23:52.755 11:18:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:52.755 11:18:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:52.755 11:18:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:52.755 11:18:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:52.755 11:18:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:52.755 11:18:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:52.755 11:18:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:52.755 11:18:12 -- nvmf/common.sh@294 -- # net_devs=() 00:23:52.756 11:18:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:52.756 11:18:12 -- nvmf/common.sh@295 -- # e810=() 00:23:52.756 11:18:12 -- nvmf/common.sh@295 -- # local -ga e810 00:23:52.756 11:18:12 -- nvmf/common.sh@296 -- # x722=() 00:23:52.756 11:18:12 -- nvmf/common.sh@296 -- # local -ga x722 00:23:52.756 11:18:12 -- nvmf/common.sh@297 -- # mlx=() 00:23:52.756 11:18:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:52.756 11:18:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:52.756 11:18:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:52.756 11:18:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:52.756 11:18:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:52.756 11:18:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:52.756 11:18:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:52.756 11:18:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:52.756 11:18:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:52.756 11:18:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:52.756 11:18:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:52.756 11:18:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:52.756 11:18:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:52.756 11:18:12 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:23:52.756 11:18:12 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:23:52.756 
11:18:12 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:23:52.756 11:18:12 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:23:52.756 11:18:12 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:23:52.756 11:18:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:52.756 11:18:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:52.756 11:18:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:23:52.756 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:23:52.756 11:18:12 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:52.756 11:18:12 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:52.756 11:18:12 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:52.756 11:18:12 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:52.756 11:18:12 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:52.756 11:18:12 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:52.756 11:18:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:52.756 11:18:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:23:52.756 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:23:52.756 11:18:12 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:52.756 11:18:12 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:52.756 11:18:12 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:52.756 11:18:12 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:52.756 11:18:12 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:52.756 11:18:12 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:52.756 11:18:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:52.756 11:18:12 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:23:52.756 11:18:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:52.756 11:18:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.756 11:18:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:52.756 11:18:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.756 11:18:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:23:52.756 Found net devices under 0000:18:00.0: mlx_0_0 00:23:52.756 11:18:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.756 11:18:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:52.756 11:18:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.756 11:18:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:52.756 11:18:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.756 11:18:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:23:52.756 Found net devices under 0000:18:00.1: mlx_0_1 00:23:52.756 11:18:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.756 11:18:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:52.756 11:18:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:52.756 11:18:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:52.756 11:18:12 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:23:52.756 11:18:12 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:23:52.756 11:18:12 -- nvmf/common.sh@408 -- # rdma_device_init 00:23:52.756 11:18:12 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:23:52.756 11:18:12 -- nvmf/common.sh@57 -- # uname 00:23:52.756 11:18:12 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:23:52.756 11:18:12 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:23:52.756 
11:18:12 -- nvmf/common.sh@62 -- # modprobe ib_core 00:23:52.756 11:18:12 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:23:52.756 11:18:12 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:23:52.756 11:18:12 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:23:52.756 11:18:12 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:23:52.756 11:18:12 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:23:52.756 11:18:12 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:23:52.756 11:18:12 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:52.756 11:18:12 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:23:52.756 11:18:12 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:52.756 11:18:12 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:52.756 11:18:12 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:52.756 11:18:12 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:52.756 11:18:12 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:52.756 11:18:12 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:52.756 11:18:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:52.756 11:18:12 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:52.756 11:18:12 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:52.756 11:18:12 -- nvmf/common.sh@104 -- # continue 2 00:23:52.756 11:18:12 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:52.756 11:18:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:52.756 11:18:12 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:52.756 11:18:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:52.756 11:18:12 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:52.756 11:18:12 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:52.756 11:18:12 -- nvmf/common.sh@104 -- # continue 2 00:23:52.756 11:18:12 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:52.756 11:18:12 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:23:52.756 11:18:12 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:52.756 11:18:12 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:52.756 11:18:12 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:52.756 11:18:12 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:52.756 11:18:12 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:23:52.756 11:18:12 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:23:52.756 11:18:12 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:23:52.756 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:52.756 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:23:52.756 altname enp24s0f0np0 00:23:52.756 altname ens785f0np0 00:23:52.756 inet 192.168.100.8/24 scope global mlx_0_0 00:23:52.756 valid_lft forever preferred_lft forever 00:23:52.756 11:18:12 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:52.756 11:18:12 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:23:52.756 11:18:12 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:52.756 11:18:12 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:52.756 11:18:12 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:52.756 11:18:12 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:52.756 11:18:12 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:23:52.756 11:18:12 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:23:52.756 11:18:12 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:23:52.756 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group 
default qlen 1000 00:23:52.756 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:23:52.756 altname enp24s0f1np1 00:23:52.756 altname ens785f1np1 00:23:52.756 inet 192.168.100.9/24 scope global mlx_0_1 00:23:52.756 valid_lft forever preferred_lft forever 00:23:52.756 11:18:12 -- nvmf/common.sh@410 -- # return 0 00:23:52.756 11:18:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:52.756 11:18:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:52.756 11:18:12 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:23:52.756 11:18:12 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:23:52.756 11:18:12 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:23:52.756 11:18:12 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:52.756 11:18:12 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:52.756 11:18:12 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:52.756 11:18:12 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:52.756 11:18:12 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:52.756 11:18:12 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:52.756 11:18:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:52.756 11:18:12 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:52.756 11:18:12 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:52.756 11:18:12 -- nvmf/common.sh@104 -- # continue 2 00:23:52.756 11:18:12 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:52.756 11:18:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:52.756 11:18:12 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:52.756 11:18:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:52.756 11:18:12 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:52.756 11:18:12 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:52.756 11:18:12 -- nvmf/common.sh@104 -- # continue 2 00:23:52.756 11:18:12 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:52.756 11:18:12 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:23:52.756 11:18:12 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:52.756 11:18:12 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:52.756 11:18:12 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:52.756 11:18:12 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:52.756 11:18:12 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:52.756 11:18:12 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:23:52.756 11:18:12 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:52.756 11:18:12 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:52.756 11:18:12 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:52.756 11:18:12 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:52.757 11:18:12 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:23:52.757 192.168.100.9' 00:23:52.757 11:18:12 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:23:52.757 192.168.100.9' 00:23:52.757 11:18:12 -- nvmf/common.sh@445 -- # head -n 1 00:23:52.757 11:18:12 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:52.757 11:18:12 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:52.757 192.168.100.9' 00:23:52.757 11:18:12 -- nvmf/common.sh@446 -- # tail -n +2 00:23:52.757 11:18:12 -- nvmf/common.sh@446 -- # head -n 1 00:23:52.757 11:18:12 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:52.757 11:18:12 -- nvmf/common.sh@447 -- # '[' -z 
192.168.100.8 ']' 00:23:52.757 11:18:12 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:52.757 11:18:12 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:23:52.757 11:18:12 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:23:52.757 11:18:12 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:23:52.757 11:18:12 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:52.757 11:18:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:52.757 11:18:12 -- common/autotest_common.sh@10 -- # set +x 00:23:52.757 11:18:12 -- host/identify.sh@19 -- # nvmfpid=1722959 00:23:52.757 11:18:12 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:52.757 11:18:12 -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:52.757 11:18:12 -- host/identify.sh@23 -- # waitforlisten 1722959 00:23:52.757 11:18:12 -- common/autotest_common.sh@829 -- # '[' -z 1722959 ']' 00:23:52.757 11:18:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.757 11:18:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:52.757 11:18:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.757 11:18:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:52.757 11:18:12 -- common/autotest_common.sh@10 -- # set +x 00:23:52.757 [2024-12-13 11:18:12.402186] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:52.757 [2024-12-13 11:18:12.402226] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.757 EAL: No free 2048 kB hugepages reported on node 1 00:23:52.757 [2024-12-13 11:18:12.457522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:52.757 [2024-12-13 11:18:12.528616] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:52.757 [2024-12-13 11:18:12.528723] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:52.757 [2024-12-13 11:18:12.528731] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.757 [2024-12-13 11:18:12.528737] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
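The identify host test that starts here reuses the same nvmftestinit flow as the dma test above (same two mlx5 ports, same 192.168.100.8/9 addressing); the only launch-time difference visible in the trace is the wider core mask. Roughly, from the lines echoed above:

    # identify.sh target launch
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # four reactors (cores 0-3) instead of two
    nvmfpid=$!                                    # 1722959 in this run
    waitforlisten "$nvmfpid"                      # wait for /var/tmp/spdk.sock before issuing RPCs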
00:23:52.757 [2024-12-13 11:18:12.528776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.757 [2024-12-13 11:18:12.528795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:52.757 [2024-12-13 11:18:12.528862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:52.757 [2024-12-13 11:18:12.528864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.757 11:18:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:52.757 11:18:13 -- common/autotest_common.sh@862 -- # return 0 00:23:52.757 11:18:13 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:52.757 11:18:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.757 11:18:13 -- common/autotest_common.sh@10 -- # set +x 00:23:52.757 [2024-12-13 11:18:13.214330] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x20e5960/0x20e9e50) succeed. 00:23:52.757 [2024-12-13 11:18:13.222486] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20e6f50/0x212b4f0) succeed. 00:23:53.019 11:18:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.019 11:18:13 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:53.019 11:18:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:53.019 11:18:13 -- common/autotest_common.sh@10 -- # set +x 00:23:53.019 11:18:13 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:53.019 11:18:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.019 11:18:13 -- common/autotest_common.sh@10 -- # set +x 00:23:53.019 Malloc0 00:23:53.019 11:18:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.019 11:18:13 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:53.019 11:18:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.019 11:18:13 -- common/autotest_common.sh@10 -- # set +x 00:23:53.019 11:18:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.019 11:18:13 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:53.019 11:18:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.019 11:18:13 -- common/autotest_common.sh@10 -- # set +x 00:23:53.019 11:18:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.019 11:18:13 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:53.019 11:18:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.019 11:18:13 -- common/autotest_common.sh@10 -- # set +x 00:23:53.019 [2024-12-13 11:18:13.415852] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:53.019 11:18:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.019 11:18:13 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:23:53.019 11:18:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.019 11:18:13 -- common/autotest_common.sh@10 -- # set +x 00:23:53.019 11:18:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.019 11:18:13 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:53.019 11:18:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.019 11:18:13 -- common/autotest_common.sh@10 -- # set +x 00:23:53.019 [2024-12-13 
11:18:13.431572] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:23:53.019 [ 00:23:53.019 { 00:23:53.019 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:53.019 "subtype": "Discovery", 00:23:53.019 "listen_addresses": [ 00:23:53.019 { 00:23:53.019 "transport": "RDMA", 00:23:53.019 "trtype": "RDMA", 00:23:53.019 "adrfam": "IPv4", 00:23:53.019 "traddr": "192.168.100.8", 00:23:53.019 "trsvcid": "4420" 00:23:53.019 } 00:23:53.019 ], 00:23:53.019 "allow_any_host": true, 00:23:53.019 "hosts": [] 00:23:53.019 }, 00:23:53.019 { 00:23:53.019 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.019 "subtype": "NVMe", 00:23:53.019 "listen_addresses": [ 00:23:53.019 { 00:23:53.019 "transport": "RDMA", 00:23:53.019 "trtype": "RDMA", 00:23:53.019 "adrfam": "IPv4", 00:23:53.019 "traddr": "192.168.100.8", 00:23:53.019 "trsvcid": "4420" 00:23:53.019 } 00:23:53.019 ], 00:23:53.019 "allow_any_host": true, 00:23:53.019 "hosts": [], 00:23:53.019 "serial_number": "SPDK00000000000001", 00:23:53.019 "model_number": "SPDK bdev Controller", 00:23:53.019 "max_namespaces": 32, 00:23:53.019 "min_cntlid": 1, 00:23:53.019 "max_cntlid": 65519, 00:23:53.019 "namespaces": [ 00:23:53.019 { 00:23:53.019 "nsid": 1, 00:23:53.019 "bdev_name": "Malloc0", 00:23:53.019 "name": "Malloc0", 00:23:53.019 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:53.019 "eui64": "ABCDEF0123456789", 00:23:53.019 "uuid": "0a3cf8a6-d323-455c-978b-cf8e37397ddc" 00:23:53.019 } 00:23:53.019 ] 00:23:53.019 } 00:23:53.019 ] 00:23:53.019 11:18:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.019 11:18:13 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:53.019 [2024-12-13 11:18:13.466736] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:53.019 [2024-12-13 11:18:13.466782] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1723107 ] 00:23:53.019 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.019 [2024-12-13 11:18:13.507336] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:53.019 [2024-12-13 11:18:13.507400] nvme_rdma.c:2257:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:23:53.019 [2024-12-13 11:18:13.507413] nvme_rdma.c:1287:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:23:53.019 [2024-12-13 11:18:13.507416] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:23:53.019 [2024-12-13 11:18:13.507443] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:53.019 [2024-12-13 11:18:13.517746] nvme_rdma.c: 506:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
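At this point the target has been configured over RPC and spdk_nvme_identify is bringing up the discovery controller whose state machine and identify dump follow. The rpc_cmd calls traced above are the test suite's wrapper around scripts/rpc.py against the default /var/tmp/spdk.sock; a condensed sketch of the same sequence, with the $rpc variable and paths purely illustrative:

    # Equivalent RPC sequence to the rpc_cmd calls traced above.
    rpc="$SPDK_DIR/scripts/rpc.py"

    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_get_subsystems   # returns the JSON dump shown above

    # Identify pass against the discovery subsystem; produces the controller dump below.
    "$SPDK_DIR/build/bin/spdk_nvme_identify" \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all
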
00:23:53.019 [2024-12-13 11:18:13.527416] nvme_rdma.c:1176:nvme_rdma_connect_established: *DEBUG*: rc =0 00:23:53.019 [2024-12-13 11:18:13.527426] nvme_rdma.c:1181:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:23:53.019 [2024-12-13 11:18:13.527432] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183a00 00:23:53.019 [2024-12-13 11:18:13.527437] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.527441] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.527445] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.527449] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.527453] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.527457] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.527461] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.527465] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.527469] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.527472] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.527476] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.527480] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.527484] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.527488] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.527492] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.527496] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.527500] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.527504] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.527508] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.527512] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.527515] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.527519] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183a00 00:23:53.020 [2024-12-13 
11:18:13.527523] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.527527] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.527531] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.527535] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.527539] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.527543] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.527547] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.527553] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.527557] nvme_rdma.c:1195:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:23:53.020 [2024-12-13 11:18:13.527560] nvme_rdma.c:1198:nvme_rdma_connect_established: *DEBUG*: rc =0 00:23:53.020 [2024-12-13 11:18:13.527563] nvme_rdma.c:1203:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:23:53.020 [2024-12-13 11:18:13.527578] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.527588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf240 len:0x400 key:0x183a00 00:23:53.020 [2024-12-13 11:18:13.533272] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.020 [2024-12-13 11:18:13.533281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:23:53.020 [2024-12-13 11:18:13.533286] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.533292] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:53.020 [2024-12-13 11:18:13.533297] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:53.020 [2024-12-13 11:18:13.533301] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:53.020 [2024-12-13 11:18:13.533311] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.533317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.020 [2024-12-13 11:18:13.533344] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.020 [2024-12-13 11:18:13.533348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:23:53.020 [2024-12-13 11:18:13.533353] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:53.020 [2024-12-13 11:18:13.533357] 
nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.533361] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:53.020 [2024-12-13 11:18:13.533366] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.533372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.020 [2024-12-13 11:18:13.533389] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.020 [2024-12-13 11:18:13.533393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:23:53.020 [2024-12-13 11:18:13.533397] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:53.020 [2024-12-13 11:18:13.533400] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.533406] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:53.020 [2024-12-13 11:18:13.533411] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.533416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.020 [2024-12-13 11:18:13.533430] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.020 [2024-12-13 11:18:13.533438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:53.020 [2024-12-13 11:18:13.533442] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:53.020 [2024-12-13 11:18:13.533447] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.533452] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.533458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.020 [2024-12-13 11:18:13.533470] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.020 [2024-12-13 11:18:13.533474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:53.020 [2024-12-13 11:18:13.533478] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:53.020 [2024-12-13 11:18:13.533482] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:53.020 [2024-12-13 11:18:13.533486] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.533490] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:53.020 [2024-12-13 11:18:13.533595] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:53.020 [2024-12-13 11:18:13.533599] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:53.020 [2024-12-13 11:18:13.533605] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.533611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.020 [2024-12-13 11:18:13.533630] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.020 [2024-12-13 11:18:13.533634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:53.020 [2024-12-13 11:18:13.533638] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:53.020 [2024-12-13 11:18:13.533642] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.533648] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.533653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.020 [2024-12-13 11:18:13.533674] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.020 [2024-12-13 11:18:13.533678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:23:53.020 [2024-12-13 11:18:13.533681] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:53.020 [2024-12-13 11:18:13.533685] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:53.020 [2024-12-13 11:18:13.533689] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.533693] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:53.020 [2024-12-13 11:18:13.533701] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:53.020 [2024-12-13 11:18:13.533708] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183a00 00:23:53.020 [2024-12-13 11:18:13.533713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183a00 00:23:53.020 [2024-12-13 11:18:13.533748] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.021 [2024-12-13 11:18:13.533752] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:53.021 [2024-12-13 11:18:13.533759] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:53.021 [2024-12-13 11:18:13.533763] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:53.021 [2024-12-13 11:18:13.533766] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:53.021 [2024-12-13 11:18:13.533771] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:53.021 [2024-12-13 11:18:13.533774] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:53.021 [2024-12-13 11:18:13.533778] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:53.021 [2024-12-13 11:18:13.533782] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183a00 00:23:53.021 [2024-12-13 11:18:13.533789] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:53.021 [2024-12-13 11:18:13.533795] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183a00 00:23:53.021 [2024-12-13 11:18:13.533800] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.021 [2024-12-13 11:18:13.533815] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.021 [2024-12-13 11:18:13.533819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:53.021 [2024-12-13 11:18:13.533825] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0540 length 0x40 lkey 0x183a00 00:23:53.021 [2024-12-13 11:18:13.533830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:53.021 [2024-12-13 11:18:13.533834] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0680 length 0x40 lkey 0x183a00 00:23:53.021 [2024-12-13 11:18:13.533839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:53.021 [2024-12-13 11:18:13.533844] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.021 [2024-12-13 11:18:13.533848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:53.021 [2024-12-13 11:18:13.533853] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x183a00 00:23:53.021 [2024-12-13 11:18:13.533858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:53.021 [2024-12-13 11:18:13.533861] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout 
(timeout 30000 ms) 00:23:53.021 [2024-12-13 11:18:13.533865] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183a00 00:23:53.021 [2024-12-13 11:18:13.533874] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:53.021 [2024-12-13 11:18:13.533879] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183a00 00:23:53.021 [2024-12-13 11:18:13.533885] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.021 [2024-12-13 11:18:13.533900] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.021 [2024-12-13 11:18:13.533904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:23:53.021 [2024-12-13 11:18:13.533908] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:53.021 [2024-12-13 11:18:13.533912] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:53.021 [2024-12-13 11:18:13.533916] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183a00 00:23:53.021 [2024-12-13 11:18:13.533922] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183a00 00:23:53.021 [2024-12-13 11:18:13.533927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183a00 00:23:53.021 [2024-12-13 11:18:13.533950] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.021 [2024-12-13 11:18:13.533953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:53.021 [2024-12-13 11:18:13.533958] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183a00 00:23:53.021 [2024-12-13 11:18:13.533965] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:53.021 [2024-12-13 11:18:13.533983] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183a00 00:23:53.021 [2024-12-13 11:18:13.533989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x183a00 00:23:53.021 [2024-12-13 11:18:13.533994] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183a00 00:23:53.021 [2024-12-13 11:18:13.533999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:53.021 [2024-12-13 11:18:13.534016] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.021 [2024-12-13 11:18:13.534020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:53.021 [2024-12-13 11:18:13.534028] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: 
local addr 0x2000003d0b80 length 0x40 lkey 0x183a00 00:23:53.021 [2024-12-13 11:18:13.534033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x183a00 00:23:53.021 [2024-12-13 11:18:13.534037] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183a00 00:23:53.021 [2024-12-13 11:18:13.534042] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.021 [2024-12-13 11:18:13.534045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:53.021 [2024-12-13 11:18:13.534049] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183a00 00:23:53.021 [2024-12-13 11:18:13.534063] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.021 [2024-12-13 11:18:13.534067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:53.021 [2024-12-13 11:18:13.534075] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183a00 00:23:53.021 [2024-12-13 11:18:13.534080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x183a00 00:23:53.021 [2024-12-13 11:18:13.534084] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183a00 00:23:53.021 [2024-12-13 11:18:13.534098] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.021 [2024-12-13 11:18:13.534102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:53.021 [2024-12-13 11:18:13.534110] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183a00 00:23:53.021 ===================================================== 00:23:53.021 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:53.021 ===================================================== 00:23:53.021 Controller Capabilities/Features 00:23:53.021 ================================ 00:23:53.021 Vendor ID: 0000 00:23:53.021 Subsystem Vendor ID: 0000 00:23:53.021 Serial Number: .................... 00:23:53.021 Model Number: ........................................ 
00:23:53.021 Firmware Version: 24.01.1 00:23:53.021 Recommended Arb Burst: 0 00:23:53.021 IEEE OUI Identifier: 00 00 00 00:23:53.021 Multi-path I/O 00:23:53.021 May have multiple subsystem ports: No 00:23:53.021 May have multiple controllers: No 00:23:53.021 Associated with SR-IOV VF: No 00:23:53.021 Max Data Transfer Size: 131072 00:23:53.021 Max Number of Namespaces: 0 00:23:53.021 Max Number of I/O Queues: 1024 00:23:53.021 NVMe Specification Version (VS): 1.3 00:23:53.021 NVMe Specification Version (Identify): 1.3 00:23:53.021 Maximum Queue Entries: 128 00:23:53.021 Contiguous Queues Required: Yes 00:23:53.021 Arbitration Mechanisms Supported 00:23:53.021 Weighted Round Robin: Not Supported 00:23:53.021 Vendor Specific: Not Supported 00:23:53.021 Reset Timeout: 15000 ms 00:23:53.021 Doorbell Stride: 4 bytes 00:23:53.021 NVM Subsystem Reset: Not Supported 00:23:53.021 Command Sets Supported 00:23:53.021 NVM Command Set: Supported 00:23:53.021 Boot Partition: Not Supported 00:23:53.021 Memory Page Size Minimum: 4096 bytes 00:23:53.021 Memory Page Size Maximum: 4096 bytes 00:23:53.021 Persistent Memory Region: Not Supported 00:23:53.021 Optional Asynchronous Events Supported 00:23:53.021 Namespace Attribute Notices: Not Supported 00:23:53.021 Firmware Activation Notices: Not Supported 00:23:53.021 ANA Change Notices: Not Supported 00:23:53.021 PLE Aggregate Log Change Notices: Not Supported 00:23:53.021 LBA Status Info Alert Notices: Not Supported 00:23:53.021 EGE Aggregate Log Change Notices: Not Supported 00:23:53.021 Normal NVM Subsystem Shutdown event: Not Supported 00:23:53.021 Zone Descriptor Change Notices: Not Supported 00:23:53.021 Discovery Log Change Notices: Supported 00:23:53.021 Controller Attributes 00:23:53.021 128-bit Host Identifier: Not Supported 00:23:53.021 Non-Operational Permissive Mode: Not Supported 00:23:53.021 NVM Sets: Not Supported 00:23:53.021 Read Recovery Levels: Not Supported 00:23:53.021 Endurance Groups: Not Supported 00:23:53.021 Predictable Latency Mode: Not Supported 00:23:53.021 Traffic Based Keep ALive: Not Supported 00:23:53.021 Namespace Granularity: Not Supported 00:23:53.021 SQ Associations: Not Supported 00:23:53.021 UUID List: Not Supported 00:23:53.021 Multi-Domain Subsystem: Not Supported 00:23:53.021 Fixed Capacity Management: Not Supported 00:23:53.021 Variable Capacity Management: Not Supported 00:23:53.021 Delete Endurance Group: Not Supported 00:23:53.021 Delete NVM Set: Not Supported 00:23:53.022 Extended LBA Formats Supported: Not Supported 00:23:53.022 Flexible Data Placement Supported: Not Supported 00:23:53.022 00:23:53.022 Controller Memory Buffer Support 00:23:53.022 ================================ 00:23:53.022 Supported: No 00:23:53.022 00:23:53.022 Persistent Memory Region Support 00:23:53.022 ================================ 00:23:53.022 Supported: No 00:23:53.022 00:23:53.022 Admin Command Set Attributes 00:23:53.022 ============================ 00:23:53.022 Security Send/Receive: Not Supported 00:23:53.022 Format NVM: Not Supported 00:23:53.022 Firmware Activate/Download: Not Supported 00:23:53.022 Namespace Management: Not Supported 00:23:53.022 Device Self-Test: Not Supported 00:23:53.022 Directives: Not Supported 00:23:53.022 NVMe-MI: Not Supported 00:23:53.022 Virtualization Management: Not Supported 00:23:53.022 Doorbell Buffer Config: Not Supported 00:23:53.022 Get LBA Status Capability: Not Supported 00:23:53.022 Command & Feature Lockdown Capability: Not Supported 00:23:53.022 Abort Command Limit: 1 00:23:53.022 
Async Event Request Limit: 4 00:23:53.022 Number of Firmware Slots: N/A 00:23:53.022 Firmware Slot 1 Read-Only: N/A 00:23:53.022 Firmware Activation Without Reset: N/A 00:23:53.022 Multiple Update Detection Support: N/A 00:23:53.022 Firmware Update Granularity: No Information Provided 00:23:53.022 Per-Namespace SMART Log: No 00:23:53.022 Asymmetric Namespace Access Log Page: Not Supported 00:23:53.022 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:53.022 Command Effects Log Page: Not Supported 00:23:53.022 Get Log Page Extended Data: Supported 00:23:53.022 Telemetry Log Pages: Not Supported 00:23:53.022 Persistent Event Log Pages: Not Supported 00:23:53.022 Supported Log Pages Log Page: May Support 00:23:53.022 Commands Supported & Effects Log Page: Not Supported 00:23:53.022 Feature Identifiers & Effects Log Page:May Support 00:23:53.022 NVMe-MI Commands & Effects Log Page: May Support 00:23:53.022 Data Area 4 for Telemetry Log: Not Supported 00:23:53.022 Error Log Page Entries Supported: 128 00:23:53.022 Keep Alive: Not Supported 00:23:53.022 00:23:53.022 NVM Command Set Attributes 00:23:53.022 ========================== 00:23:53.022 Submission Queue Entry Size 00:23:53.022 Max: 1 00:23:53.022 Min: 1 00:23:53.022 Completion Queue Entry Size 00:23:53.022 Max: 1 00:23:53.022 Min: 1 00:23:53.022 Number of Namespaces: 0 00:23:53.022 Compare Command: Not Supported 00:23:53.022 Write Uncorrectable Command: Not Supported 00:23:53.022 Dataset Management Command: Not Supported 00:23:53.022 Write Zeroes Command: Not Supported 00:23:53.022 Set Features Save Field: Not Supported 00:23:53.022 Reservations: Not Supported 00:23:53.022 Timestamp: Not Supported 00:23:53.022 Copy: Not Supported 00:23:53.022 Volatile Write Cache: Not Present 00:23:53.022 Atomic Write Unit (Normal): 1 00:23:53.022 Atomic Write Unit (PFail): 1 00:23:53.022 Atomic Compare & Write Unit: 1 00:23:53.022 Fused Compare & Write: Supported 00:23:53.022 Scatter-Gather List 00:23:53.022 SGL Command Set: Supported 00:23:53.022 SGL Keyed: Supported 00:23:53.022 SGL Bit Bucket Descriptor: Not Supported 00:23:53.022 SGL Metadata Pointer: Not Supported 00:23:53.022 Oversized SGL: Not Supported 00:23:53.022 SGL Metadata Address: Not Supported 00:23:53.022 SGL Offset: Supported 00:23:53.022 Transport SGL Data Block: Not Supported 00:23:53.022 Replay Protected Memory Block: Not Supported 00:23:53.022 00:23:53.022 Firmware Slot Information 00:23:53.022 ========================= 00:23:53.022 Active slot: 0 00:23:53.022 00:23:53.022 00:23:53.022 Error Log 00:23:53.022 ========= 00:23:53.022 00:23:53.022 Active Namespaces 00:23:53.022 ================= 00:23:53.022 Discovery Log Page 00:23:53.022 ================== 00:23:53.022 Generation Counter: 2 00:23:53.022 Number of Records: 2 00:23:53.022 Record Format: 0 00:23:53.022 00:23:53.022 Discovery Log Entry 0 00:23:53.022 ---------------------- 00:23:53.022 Transport Type: 1 (RDMA) 00:23:53.022 Address Family: 1 (IPv4) 00:23:53.022 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:53.022 Entry Flags: 00:23:53.022 Duplicate Returned Information: 1 00:23:53.022 Explicit Persistent Connection Support for Discovery: 1 00:23:53.022 Transport Requirements: 00:23:53.022 Secure Channel: Not Required 00:23:53.022 Port ID: 0 (0x0000) 00:23:53.022 Controller ID: 65535 (0xffff) 00:23:53.022 Admin Max SQ Size: 128 00:23:53.022 Transport Service Identifier: 4420 00:23:53.022 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:53.022 Transport Address: 192.168.100.8 
00:23:53.022 Transport Specific Address Subtype - RDMA 00:23:53.022 RDMA QP Service Type: 1 (Reliable Connected) 00:23:53.022 RDMA Provider Type: 1 (No provider specified) 00:23:53.022 RDMA CM Service: 1 (RDMA_CM) 00:23:53.022 Discovery Log Entry 1 00:23:53.022 ---------------------- 00:23:53.022 Transport Type: 1 (RDMA) 00:23:53.022 Address Family: 1 (IPv4) 00:23:53.022 Subsystem Type: 2 (NVM Subsystem) 00:23:53.022 Entry Flags: 00:23:53.022 Duplicate Returned Information: 0 00:23:53.022 Explicit Persistent Connection Support for Discovery: 0 00:23:53.022 Transport Requirements: 00:23:53.022 Secure Channel: Not Required 00:23:53.022 Port ID: 0 (0x0000) 00:23:53.022 Controller ID: 65535 (0xffff) 00:23:53.022 Admin Max SQ Size: [2024-12-13 11:18:13.534172] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:23:53.022 [2024-12-13 11:18:13.534179] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 43888 doesn't match qid 00:23:53.022 [2024-12-13 11:18:13.534191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32551 cdw0:5 sqhd:fe28 p:0 m:0 dnr:0 00:23:53.022 [2024-12-13 11:18:13.534195] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 43888 doesn't match qid 00:23:53.022 [2024-12-13 11:18:13.534201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32551 cdw0:5 sqhd:fe28 p:0 m:0 dnr:0 00:23:53.022 [2024-12-13 11:18:13.534205] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 43888 doesn't match qid 00:23:53.022 [2024-12-13 11:18:13.534211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32551 cdw0:5 sqhd:fe28 p:0 m:0 dnr:0 00:23:53.022 [2024-12-13 11:18:13.534215] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 43888 doesn't match qid 00:23:53.022 [2024-12-13 11:18:13.534221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32551 cdw0:5 sqhd:fe28 p:0 m:0 dnr:0 00:23:53.022 [2024-12-13 11:18:13.534227] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x183a00 00:23:53.022 [2024-12-13 11:18:13.534233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.022 [2024-12-13 11:18:13.534248] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.022 [2024-12-13 11:18:13.534252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:23:53.022 [2024-12-13 11:18:13.534258] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.022 [2024-12-13 11:18:13.534264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.022 [2024-12-13 11:18:13.534272] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183a00 00:23:53.022 [2024-12-13 11:18:13.534287] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.022 [2024-12-13 11:18:13.534291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:53.022 [2024-12-13 11:18:13.534295] 
nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:53.022 [2024-12-13 11:18:13.534299] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:53.022 [2024-12-13 11:18:13.534303] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183a00 00:23:53.022 [2024-12-13 11:18:13.534309] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.022 [2024-12-13 11:18:13.534315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.022 [2024-12-13 11:18:13.534328] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.022 [2024-12-13 11:18:13.534332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:23:53.022 [2024-12-13 11:18:13.534337] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183a00 00:23:53.022 [2024-12-13 11:18:13.534343] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.022 [2024-12-13 11:18:13.534348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.022 [2024-12-13 11:18:13.534368] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.022 [2024-12-13 11:18:13.534372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:23:53.022 [2024-12-13 11:18:13.534376] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183a00 00:23:53.022 [2024-12-13 11:18:13.534383] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.022 [2024-12-13 11:18:13.534388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.022 [2024-12-13 11:18:13.534407] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.022 [2024-12-13 11:18:13.534411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:23:53.023 [2024-12-13 11:18:13.534416] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183a00 00:23:53.023 [2024-12-13 11:18:13.534422] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.023 [2024-12-13 11:18:13.534428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.023 [2024-12-13 11:18:13.534449] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.023 [2024-12-13 11:18:13.534455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:23:53.023 [2024-12-13 11:18:13.534460] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183a00 00:23:53.023 [2024-12-13 11:18:13.534466] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: 
*DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.023 [2024-12-13 11:18:13.534472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.023 [2024-12-13 11:18:13.534484] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.023 [2024-12-13 11:18:13.534488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:23:53.023 [2024-12-13 11:18:13.534493] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183a00 00:23:53.023 [2024-12-13 11:18:13.534499] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.023 [2024-12-13 11:18:13.534504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.023 [2024-12-13 11:18:13.534518] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.023 [2024-12-13 11:18:13.534522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:23:53.023 [2024-12-13 11:18:13.534527] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183a00 00:23:53.023 [2024-12-13 11:18:13.534534] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.023 [2024-12-13 11:18:13.534541] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.023 [2024-12-13 11:18:13.534556] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.023 [2024-12-13 11:18:13.534560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:23:53.023 [2024-12-13 11:18:13.534564] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183a00 00:23:53.023 [2024-12-13 11:18:13.534571] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.023 [2024-12-13 11:18:13.534578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.023 [2024-12-13 11:18:13.534595] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.023 [2024-12-13 11:18:13.534600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:23:53.023 [2024-12-13 11:18:13.534604] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183a00 00:23:53.023 [2024-12-13 11:18:13.534610] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.023 [2024-12-13 11:18:13.534617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.023 [2024-12-13 11:18:13.534638] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.023 [2024-12-13 11:18:13.534642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 
dnr:0 00:23:53.023 [2024-12-13 11:18:13.534647] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183a00 00:23:53.023 [2024-12-13 11:18:13.534653] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.023 [2024-12-13 11:18:13.534659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.023 [2024-12-13 11:18:13.534672] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.023 [2024-12-13 11:18:13.534676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:23:53.023 [2024-12-13 11:18:13.534680] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183a00 00:23:53.023 [2024-12-13 11:18:13.534687] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.023 [2024-12-13 11:18:13.534692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.023 [2024-12-13 11:18:13.534713] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.023 [2024-12-13 11:18:13.534717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:23:53.023 [2024-12-13 11:18:13.534722] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183a00 00:23:53.023 [2024-12-13 11:18:13.534728] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.023 [2024-12-13 11:18:13.534734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.023 [2024-12-13 11:18:13.534748] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.023 [2024-12-13 11:18:13.534753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:23:53.023 [2024-12-13 11:18:13.534757] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183a00 00:23:53.023 [2024-12-13 11:18:13.534763] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.023 [2024-12-13 11:18:13.534770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.023 [2024-12-13 11:18:13.534788] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.023 [2024-12-13 11:18:13.534792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:23:53.023 [2024-12-13 11:18:13.534796] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183a00 00:23:53.023 [2024-12-13 11:18:13.534802] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.023 [2024-12-13 11:18:13.534808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.023 [2024-12-13 
11:18:13.534827] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.023 [2024-12-13 11:18:13.534831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:23:53.023 [2024-12-13 11:18:13.534835] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183a00 00:23:53.023 [2024-12-13 11:18:13.534841] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.023 [2024-12-13 11:18:13.534847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.023 [2024-12-13 11:18:13.534866] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.023 [2024-12-13 11:18:13.534871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:23:53.023 [2024-12-13 11:18:13.534875] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183a00 00:23:53.023 [2024-12-13 11:18:13.534881] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.023 [2024-12-13 11:18:13.534886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.023 [2024-12-13 11:18:13.534902] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.023 [2024-12-13 11:18:13.534906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:23:53.023 [2024-12-13 11:18:13.534910] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183a00 00:23:53.023 [2024-12-13 11:18:13.534916] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.023 [2024-12-13 11:18:13.534922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.023 [2024-12-13 11:18:13.534936] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.023 [2024-12-13 11:18:13.534941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:23:53.023 [2024-12-13 11:18:13.534945] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183a00 00:23:53.023 [2024-12-13 11:18:13.534952] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.023 [2024-12-13 11:18:13.534957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.023 [2024-12-13 11:18:13.534975] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.023 [2024-12-13 11:18:13.534979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:23:53.023 [2024-12-13 11:18:13.534983] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183a00 00:23:53.023 [2024-12-13 11:18:13.534991] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 
0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.023 [2024-12-13 11:18:13.534996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.023 [2024-12-13 11:18:13.535012] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.023 [2024-12-13 11:18:13.535016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:23:53.023 [2024-12-13 11:18:13.535021] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183a00 00:23:53.023 [2024-12-13 11:18:13.535027] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.023 [2024-12-13 11:18:13.535033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.023 [2024-12-13 11:18:13.535054] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.023 [2024-12-13 11:18:13.535057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:23:53.024 [2024-12-13 11:18:13.535062] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183a00 00:23:53.024 [2024-12-13 11:18:13.535068] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.024 [2024-12-13 11:18:13.535073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.024 [2024-12-13 11:18:13.535086] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.024 [2024-12-13 11:18:13.535090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:23:53.024 [2024-12-13 11:18:13.535094] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183a00 00:23:53.024 [2024-12-13 11:18:13.535100] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.024 [2024-12-13 11:18:13.535106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.024 [2024-12-13 11:18:13.535122] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.024 [2024-12-13 11:18:13.535126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:23:53.024 [2024-12-13 11:18:13.535131] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183a00 00:23:53.024 [2024-12-13 11:18:13.535137] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.024 [2024-12-13 11:18:13.535144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.024 [2024-12-13 11:18:13.535157] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.024 [2024-12-13 11:18:13.535162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:23:53.024 
[2024-12-13 11:18:13.535166] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183a00 00:23:53.024 [2024-12-13 11:18:13.535172] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.024 [2024-12-13 11:18:13.535178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.024 [2024-12-13 11:18:13.535199] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.024 [2024-12-13 11:18:13.535203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:23:53.024 [2024-12-13 11:18:13.535207] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183a00 00:23:53.024 [2024-12-13 11:18:13.535217] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.024 [2024-12-13 11:18:13.535223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.024 [2024-12-13 11:18:13.535242] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.024 [2024-12-13 11:18:13.535246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:23:53.024 [2024-12-13 11:18:13.535252] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183a00 00:23:53.024 [2024-12-13 11:18:13.535259] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.024 [2024-12-13 11:18:13.535264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.024 [2024-12-13 11:18:13.535284] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.024 [2024-12-13 11:18:13.535288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:23:53.024 [2024-12-13 11:18:13.535292] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183a00 00:23:53.024 [2024-12-13 11:18:13.535298] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.024 [2024-12-13 11:18:13.535304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.024 [2024-12-13 11:18:13.535317] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.024 [2024-12-13 11:18:13.535322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:23:53.024 [2024-12-13 11:18:13.535326] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183a00 00:23:53.024 [2024-12-13 11:18:13.535332] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.024 [2024-12-13 11:18:13.535338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.024 [2024-12-13 11:18:13.535351] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.024 [2024-12-13 11:18:13.535355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:23:53.024 [2024-12-13 11:18:13.535359] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183a00 00:23:53.024 [2024-12-13 11:18:13.535365] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.024 [2024-12-13 11:18:13.535371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.024 [2024-12-13 11:18:13.535386] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.024 [2024-12-13 11:18:13.535391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:23:53.024 [2024-12-13 11:18:13.535395] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183a00 00:23:53.024 [2024-12-13 11:18:13.535402] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.024 [2024-12-13 11:18:13.535408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.024 [2024-12-13 11:18:13.535425] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.024 [2024-12-13 11:18:13.535428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:23:53.024 [2024-12-13 11:18:13.535434] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183a00 00:23:53.024 [2024-12-13 11:18:13.535442] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.024 [2024-12-13 11:18:13.535448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.024 [2024-12-13 11:18:13.535462] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.024 [2024-12-13 11:18:13.535467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:23:53.024 [2024-12-13 11:18:13.535471] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183a00 00:23:53.024 [2024-12-13 11:18:13.535477] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.024 [2024-12-13 11:18:13.535483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.024 [2024-12-13 11:18:13.535502] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.024 [2024-12-13 11:18:13.535506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:23:53.024 [2024-12-13 11:18:13.535510] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183a00 00:23:53.024 [2024-12-13 11:18:13.535516] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x183a00 00:23:53.024 [2024-12-13 11:18:13.535522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.024 [2024-12-13 11:18:13.535542] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.024 [2024-12-13 11:18:13.535546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:23:53.024 [2024-12-13 11:18:13.535550] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183a00 00:23:53.024 [2024-12-13 11:18:13.535557] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.024 [2024-12-13 11:18:13.535562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.024 [2024-12-13 11:18:13.535579] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.024 [2024-12-13 11:18:13.535584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:23:53.024 [2024-12-13 11:18:13.535588] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183a00 00:23:53.024 [2024-12-13 11:18:13.535595] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.024 [2024-12-13 11:18:13.535600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.024 [2024-12-13 11:18:13.535617] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.024 [2024-12-13 11:18:13.535621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:23:53.024 [2024-12-13 11:18:13.535626] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183a00 00:23:53.024 [2024-12-13 11:18:13.535632] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.024 [2024-12-13 11:18:13.535637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.024 [2024-12-13 11:18:13.535650] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.024 [2024-12-13 11:18:13.535656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:23:53.024 [2024-12-13 11:18:13.535662] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183a00 00:23:53.024 [2024-12-13 11:18:13.535668] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.024 [2024-12-13 11:18:13.535674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.024 [2024-12-13 11:18:13.535695] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.024 [2024-12-13 11:18:13.535699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:23:53.025 [2024-12-13 
11:18:13.535703] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183a00 00:23:53.025 [2024-12-13 11:18:13.535709] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.025 [2024-12-13 11:18:13.535715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.025 [2024-12-13 11:18:13.535729] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.025 [2024-12-13 11:18:13.535733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:23:53.025 [2024-12-13 11:18:13.535737] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183a00 00:23:53.025 [2024-12-13 11:18:13.535743] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.025 [2024-12-13 11:18:13.535748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.025 [2024-12-13 11:18:13.535770] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.025 [2024-12-13 11:18:13.535775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:23:53.025 [2024-12-13 11:18:13.535780] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183a00 00:23:53.025 [2024-12-13 11:18:13.535788] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.025 [2024-12-13 11:18:13.535794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.025 [2024-12-13 11:18:13.535813] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.025 [2024-12-13 11:18:13.535817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:23:53.025 [2024-12-13 11:18:13.535821] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183a00 00:23:53.025 [2024-12-13 11:18:13.535828] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.025 [2024-12-13 11:18:13.535833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.025 [2024-12-13 11:18:13.535852] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.025 [2024-12-13 11:18:13.535856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:23:53.025 [2024-12-13 11:18:13.535860] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183a00 00:23:53.025 [2024-12-13 11:18:13.535866] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.025 [2024-12-13 11:18:13.535872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.025 [2024-12-13 11:18:13.535893] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.025 [2024-12-13 11:18:13.535899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:23:53.025 [2024-12-13 11:18:13.535903] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183a00 00:23:53.025 [2024-12-13 11:18:13.535910] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.025 [2024-12-13 11:18:13.535915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.025 [2024-12-13 11:18:13.535930] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.025 [2024-12-13 11:18:13.535934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:23:53.025 [2024-12-13 11:18:13.535938] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183a00 00:23:53.025 [2024-12-13 11:18:13.535944] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.025 [2024-12-13 11:18:13.535950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.025 [2024-12-13 11:18:13.535966] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.025 [2024-12-13 11:18:13.535970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:23:53.025 [2024-12-13 11:18:13.535974] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183a00 00:23:53.025 [2024-12-13 11:18:13.535980] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.025 [2024-12-13 11:18:13.535986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.025 [2024-12-13 11:18:13.536006] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.025 [2024-12-13 11:18:13.536011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:23:53.025 [2024-12-13 11:18:13.536014] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183a00 00:23:53.025 [2024-12-13 11:18:13.536021] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.025 [2024-12-13 11:18:13.536026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.025 [2024-12-13 11:18:13.536042] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.025 [2024-12-13 11:18:13.536046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:23:53.025 [2024-12-13 11:18:13.536050] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183a00 00:23:53.025 [2024-12-13 11:18:13.536057] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x183a00 00:23:53.025 [2024-12-13 11:18:13.536062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.025 [2024-12-13 11:18:13.536081] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.025 [2024-12-13 11:18:13.536085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:23:53.025 [2024-12-13 11:18:13.536089] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183a00 00:23:53.025 [2024-12-13 11:18:13.536095] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.025 [2024-12-13 11:18:13.536101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.025 [2024-12-13 11:18:13.536121] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.025 [2024-12-13 11:18:13.536127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:23:53.025 [2024-12-13 11:18:13.536131] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183a00 00:23:53.025 [2024-12-13 11:18:13.536137] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.025 [2024-12-13 11:18:13.536143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.025 [2024-12-13 11:18:13.536163] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.025 [2024-12-13 11:18:13.536167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:23:53.025 [2024-12-13 11:18:13.536171] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183a00 00:23:53.025 [2024-12-13 11:18:13.536177] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.025 [2024-12-13 11:18:13.536183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.025 [2024-12-13 11:18:13.536203] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.025 [2024-12-13 11:18:13.536207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:23:53.025 [2024-12-13 11:18:13.536211] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183a00 00:23:53.025 [2024-12-13 11:18:13.536218] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.025 [2024-12-13 11:18:13.536223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.025 [2024-12-13 11:18:13.536238] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.025 [2024-12-13 11:18:13.536242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:23:53.025 [2024-12-13 
11:18:13.536246] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183a00 00:23:53.025 [2024-12-13 11:18:13.536252] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.025 [2024-12-13 11:18:13.536257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.025 [2024-12-13 11:18:13.536273] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.025 [2024-12-13 11:18:13.536277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:23:53.025 [2024-12-13 11:18:13.536282] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183a00 00:23:53.025 [2024-12-13 11:18:13.536289] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.025 [2024-12-13 11:18:13.536294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.025 [2024-12-13 11:18:13.536316] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.025 [2024-12-13 11:18:13.536320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:23:53.025 [2024-12-13 11:18:13.536324] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183a00 00:23:53.025 [2024-12-13 11:18:13.536331] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.025 [2024-12-13 11:18:13.536336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.025 [2024-12-13 11:18:13.536356] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.025 [2024-12-13 11:18:13.536360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:23:53.025 [2024-12-13 11:18:13.536364] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183a00 00:23:53.025 [2024-12-13 11:18:13.536371] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.025 [2024-12-13 11:18:13.536376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.025 [2024-12-13 11:18:13.536398] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.025 [2024-12-13 11:18:13.536402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:23:53.025 [2024-12-13 11:18:13.536406] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183a00 00:23:53.025 [2024-12-13 11:18:13.536413] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.026 [2024-12-13 11:18:13.536418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.026 [2024-12-13 11:18:13.536437] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.026 [2024-12-13 11:18:13.536442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:23:53.026 [2024-12-13 11:18:13.536446] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183a00 00:23:53.026 [2024-12-13 11:18:13.536452] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.026 [2024-12-13 11:18:13.536457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.026 [2024-12-13 11:18:13.536478] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.026 [2024-12-13 11:18:13.536482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:23:53.026 [2024-12-13 11:18:13.536486] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183a00 00:23:53.026 [2024-12-13 11:18:13.536492] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.026 [2024-12-13 11:18:13.536498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.026 [2024-12-13 11:18:13.536515] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.026 [2024-12-13 11:18:13.536519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:23:53.026 [2024-12-13 11:18:13.536523] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183a00 00:23:53.026 [2024-12-13 11:18:13.536530] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.026 [2024-12-13 11:18:13.536535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.026 [2024-12-13 11:18:13.536556] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.026 [2024-12-13 11:18:13.536560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:23:53.026 [2024-12-13 11:18:13.536564] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183a00 00:23:53.026 [2024-12-13 11:18:13.536570] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.026 [2024-12-13 11:18:13.536576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.026 [2024-12-13 11:18:13.536593] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.026 [2024-12-13 11:18:13.536597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:23:53.026 [2024-12-13 11:18:13.536602] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183a00 00:23:53.026 [2024-12-13 11:18:13.536608] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x183a00 00:23:53.026 [2024-12-13 11:18:13.536614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.026 [2024-12-13 11:18:13.536634] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.026 [2024-12-13 11:18:13.536638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:23:53.026 [2024-12-13 11:18:13.536642] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183a00 00:23:53.026 [2024-12-13 11:18:13.536648] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.026 [2024-12-13 11:18:13.536654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.026 [2024-12-13 11:18:13.536671] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.026 [2024-12-13 11:18:13.536675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:23:53.026 [2024-12-13 11:18:13.536679] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183a00 00:23:53.026 [2024-12-13 11:18:13.536686] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.026 [2024-12-13 11:18:13.536691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.026 [2024-12-13 11:18:13.536710] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.026 [2024-12-13 11:18:13.536714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:23:53.026 [2024-12-13 11:18:13.536718] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183a00 00:23:53.026 [2024-12-13 11:18:13.536724] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.026 [2024-12-13 11:18:13.536730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.026 [2024-12-13 11:18:13.536751] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.026 [2024-12-13 11:18:13.536756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:23:53.026 [2024-12-13 11:18:13.536760] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183a00 00:23:53.026 [2024-12-13 11:18:13.536766] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.026 [2024-12-13 11:18:13.536772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.026 [2024-12-13 11:18:13.536784] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.026 [2024-12-13 11:18:13.536789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:23:53.026 [2024-12-13 
11:18:13.536793] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183a00 00:23:53.026 [2024-12-13 11:18:13.536799] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.026 [2024-12-13 11:18:13.536806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.026 [2024-12-13 11:18:13.536820] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.026 [2024-12-13 11:18:13.536824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:23:53.026 [2024-12-13 11:18:13.536828] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183a00 00:23:53.026 [2024-12-13 11:18:13.536835] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.026 [2024-12-13 11:18:13.536841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.026 [2024-12-13 11:18:13.536862] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.026 [2024-12-13 11:18:13.536866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:23:53.026 [2024-12-13 11:18:13.536871] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183a00 00:23:53.026 [2024-12-13 11:18:13.536877] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.026 [2024-12-13 11:18:13.536882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.026 [2024-12-13 11:18:13.536900] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.026 [2024-12-13 11:18:13.536904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:23:53.026 [2024-12-13 11:18:13.536908] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183a00 00:23:53.026 [2024-12-13 11:18:13.536914] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.026 [2024-12-13 11:18:13.536920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.026 [2024-12-13 11:18:13.536940] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.026 [2024-12-13 11:18:13.536944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:23:53.026 [2024-12-13 11:18:13.536948] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183a00 00:23:53.026 [2024-12-13 11:18:13.536955] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.026 [2024-12-13 11:18:13.536960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.026 [2024-12-13 11:18:13.536983] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.026 [2024-12-13 11:18:13.536987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:23:53.026 [2024-12-13 11:18:13.536991] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183a00 00:23:53.026 [2024-12-13 11:18:13.536998] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.026 [2024-12-13 11:18:13.537003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.026 [2024-12-13 11:18:13.537017] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.026 [2024-12-13 11:18:13.537021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:23:53.026 [2024-12-13 11:18:13.537025] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183a00 00:23:53.026 [2024-12-13 11:18:13.537032] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.026 [2024-12-13 11:18:13.537038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.026 [2024-12-13 11:18:13.537059] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.026 [2024-12-13 11:18:13.537063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:23:53.026 [2024-12-13 11:18:13.537067] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183a00 00:23:53.026 [2024-12-13 11:18:13.537074] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.026 [2024-12-13 11:18:13.537079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.026 [2024-12-13 11:18:13.537098] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.026 [2024-12-13 11:18:13.537102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:23:53.026 [2024-12-13 11:18:13.537106] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183a00 00:23:53.026 [2024-12-13 11:18:13.537113] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.026 [2024-12-13 11:18:13.537118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.026 [2024-12-13 11:18:13.537137] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.027 [2024-12-13 11:18:13.537141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:23:53.027 [2024-12-13 11:18:13.537145] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183a00 00:23:53.027 [2024-12-13 11:18:13.537151] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x183a00 00:23:53.027 [2024-12-13 11:18:13.537157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.027 [2024-12-13 11:18:13.537176] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.027 [2024-12-13 11:18:13.537180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:23:53.027 [2024-12-13 11:18:13.537184] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183a00 00:23:53.027 [2024-12-13 11:18:13.537190] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.027 [2024-12-13 11:18:13.537196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.027 [2024-12-13 11:18:13.537210] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.027 [2024-12-13 11:18:13.537214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:23:53.027 [2024-12-13 11:18:13.537219] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183a00 00:23:53.027 [2024-12-13 11:18:13.537225] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.027 [2024-12-13 11:18:13.537230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.027 [2024-12-13 11:18:13.537252] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.027 [2024-12-13 11:18:13.537256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:23:53.027 [2024-12-13 11:18:13.537260] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183a00 00:23:53.027 [2024-12-13 11:18:13.541276] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.027 [2024-12-13 11:18:13.541283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.027 [2024-12-13 11:18:13.541299] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.027 [2024-12-13 11:18:13.541303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:001f p:0 m:0 dnr:0 00:23:53.027 [2024-12-13 11:18:13.541307] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183a00 00:23:53.027 [2024-12-13 11:18:13.541312] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:23:53.027 128 00:23:53.027 Transport Service Identifier: 4420 00:23:53.027 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:53.027 Transport Address: 192.168.100.8 00:23:53.027 Transport Specific Address Subtype - RDMA 00:23:53.027 RDMA QP Service Type: 1 (Reliable Connected) 00:23:53.027 RDMA Provider Type: 1 (No provider specified) 00:23:53.027 RDMA CM Service: 1 (RDMA_CM) 00:23:53.027 11:18:13 -- host/identify.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:53.288 [2024-12-13 11:18:13.606988] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:53.289 [2024-12-13 11:18:13.607031] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1723117 ] 00:23:53.289 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.289 [2024-12-13 11:18:13.649446] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:53.289 [2024-12-13 11:18:13.649502] nvme_rdma.c:2257:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:23:53.289 [2024-12-13 11:18:13.649521] nvme_rdma.c:1287:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:23:53.289 [2024-12-13 11:18:13.649524] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:23:53.289 [2024-12-13 11:18:13.649545] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:53.289 [2024-12-13 11:18:13.663786] nvme_rdma.c: 506:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:23:53.289 [2024-12-13 11:18:13.677058] nvme_rdma.c:1176:nvme_rdma_connect_established: *DEBUG*: rc =0 00:23:53.289 [2024-12-13 11:18:13.677067] nvme_rdma.c:1181:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:23:53.289 [2024-12-13 11:18:13.677073] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.677078] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.677082] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.677086] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.677090] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.677094] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.677098] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.677104] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.677108] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.677112] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.677116] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.677120] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.677124] nvme_rdma.c: 964:nvme_rdma_create_rsps: 
*DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.677128] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.677132] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.677136] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.677139] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.677144] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.677147] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.677151] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.677155] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.677159] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.677163] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.677167] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.677171] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.677175] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.677178] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.677182] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.677186] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.677190] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.677194] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.677197] nvme_rdma.c:1195:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:23:53.289 [2024-12-13 11:18:13.677201] nvme_rdma.c:1198:nvme_rdma_connect_established: *DEBUG*: rc =0 00:23:53.289 [2024-12-13 11:18:13.677204] nvme_rdma.c:1203:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:23:53.289 [2024-12-13 11:18:13.677216] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.677225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf240 len:0x400 key:0x183a00 00:23:53.289 [2024-12-13 11:18:13.683273] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.289 [2024-12-13 11:18:13.683280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:23:53.289 [2024-12-13 11:18:13.683286] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.683293] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:53.289 [2024-12-13 11:18:13.683297] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:53.289 [2024-12-13 11:18:13.683302] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:53.289 [2024-12-13 11:18:13.683310] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.683316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.289 [2024-12-13 11:18:13.683337] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.289 [2024-12-13 11:18:13.683341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:23:53.289 [2024-12-13 11:18:13.683346] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:53.289 [2024-12-13 11:18:13.683350] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.683354] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:53.289 [2024-12-13 11:18:13.683359] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.683365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.289 [2024-12-13 11:18:13.683384] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.289 [2024-12-13 11:18:13.683388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:23:53.289 [2024-12-13 11:18:13.683392] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:53.289 [2024-12-13 11:18:13.683396] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.683401] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:53.289 [2024-12-13 11:18:13.683406] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.683411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.289 [2024-12-13 11:18:13.683435] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.289 [2024-12-13 11:18:13.683439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:53.289 [2024-12-13 11:18:13.683443] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:53.289 [2024-12-13 11:18:13.683447] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.683453] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.683458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.289 [2024-12-13 11:18:13.683471] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.289 [2024-12-13 11:18:13.683475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:53.289 [2024-12-13 11:18:13.683479] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:53.289 [2024-12-13 11:18:13.683485] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:53.289 [2024-12-13 11:18:13.683488] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.683493] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:53.289 [2024-12-13 11:18:13.683597] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:53.289 [2024-12-13 11:18:13.683600] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:53.289 [2024-12-13 11:18:13.683606] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.683612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.289 [2024-12-13 11:18:13.683629] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.289 [2024-12-13 11:18:13.683634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:53.289 [2024-12-13 11:18:13.683638] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:53.289 [2024-12-13 11:18:13.683641] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.683647] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183a00 00:23:53.289 [2024-12-13 11:18:13.683653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.289 [2024-12-13 11:18:13.683669] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.289 [2024-12-13 11:18:13.683673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:23:53.290 [2024-12-13 11:18:13.683677] 
nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:53.290 [2024-12-13 11:18:13.683680] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:53.290 [2024-12-13 11:18:13.683684] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183a00 00:23:53.290 [2024-12-13 11:18:13.683689] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:53.290 [2024-12-13 11:18:13.683694] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:53.290 [2024-12-13 11:18:13.683701] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183a00 00:23:53.290 [2024-12-13 11:18:13.683707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183a00 00:23:53.290 [2024-12-13 11:18:13.683740] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.290 [2024-12-13 11:18:13.683744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:53.290 [2024-12-13 11:18:13.683750] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:53.290 [2024-12-13 11:18:13.683753] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:53.290 [2024-12-13 11:18:13.683757] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:53.290 [2024-12-13 11:18:13.683760] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:53.290 [2024-12-13 11:18:13.683768] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:53.290 [2024-12-13 11:18:13.683772] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:53.290 [2024-12-13 11:18:13.683776] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183a00 00:23:53.290 [2024-12-13 11:18:13.683782] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:53.290 [2024-12-13 11:18:13.683788] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183a00 00:23:53.290 [2024-12-13 11:18:13.683793] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.290 [2024-12-13 11:18:13.683812] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.290 [2024-12-13 11:18:13.683816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:53.290 [2024-12-13 11:18:13.683822] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0540 length 0x40 lkey 0x183a00 00:23:53.290 [2024-12-13 
11:18:13.683827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:53.290 [2024-12-13 11:18:13.683831] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0680 length 0x40 lkey 0x183a00 00:23:53.290 [2024-12-13 11:18:13.683836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:53.290 [2024-12-13 11:18:13.683841] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.290 [2024-12-13 11:18:13.683845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:53.290 [2024-12-13 11:18:13.683850] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x183a00 00:23:53.290 [2024-12-13 11:18:13.683855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:53.290 [2024-12-13 11:18:13.683859] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:53.290 [2024-12-13 11:18:13.683862] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183a00 00:23:53.290 [2024-12-13 11:18:13.683869] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:53.290 [2024-12-13 11:18:13.683875] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183a00 00:23:53.290 [2024-12-13 11:18:13.683880] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.290 [2024-12-13 11:18:13.683895] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.290 [2024-12-13 11:18:13.683899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:23:53.290 [2024-12-13 11:18:13.683903] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:53.290 [2024-12-13 11:18:13.683907] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:53.290 [2024-12-13 11:18:13.683910] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183a00 00:23:53.290 [2024-12-13 11:18:13.683915] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:53.290 [2024-12-13 11:18:13.683923] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:53.290 [2024-12-13 11:18:13.683928] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183a00 00:23:53.290 [2024-12-13 11:18:13.683933] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.290 [2024-12-13 
11:18:13.683949] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.290 [2024-12-13 11:18:13.683953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:23:53.290 [2024-12-13 11:18:13.683997] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:53.290 [2024-12-13 11:18:13.684001] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183a00 00:23:53.290 [2024-12-13 11:18:13.684007] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:53.290 [2024-12-13 11:18:13.684013] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183a00 00:23:53.290 [2024-12-13 11:18:13.684019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x183a00 00:23:53.290 [2024-12-13 11:18:13.684039] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.290 [2024-12-13 11:18:13.684044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:53.290 [2024-12-13 11:18:13.684052] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:53.290 [2024-12-13 11:18:13.684063] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:53.290 [2024-12-13 11:18:13.684068] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183a00 00:23:53.290 [2024-12-13 11:18:13.684073] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:53.290 [2024-12-13 11:18:13.684079] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183a00 00:23:53.290 [2024-12-13 11:18:13.684084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183a00 00:23:53.290 [2024-12-13 11:18:13.684109] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.290 [2024-12-13 11:18:13.684113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:53.290 [2024-12-13 11:18:13.684121] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:53.290 [2024-12-13 11:18:13.684125] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183a00 00:23:53.290 [2024-12-13 11:18:13.684131] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:53.290 [2024-12-13 11:18:13.684137] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183a00 00:23:53.290 [2024-12-13 11:18:13.684142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 
cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183a00 00:23:53.290 [2024-12-13 11:18:13.684164] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.290 [2024-12-13 11:18:13.684168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:53.290 [2024-12-13 11:18:13.684175] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:53.290 [2024-12-13 11:18:13.684179] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183a00 00:23:53.290 [2024-12-13 11:18:13.684183] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:53.290 [2024-12-13 11:18:13.684189] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:53.290 [2024-12-13 11:18:13.684194] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:53.290 [2024-12-13 11:18:13.684198] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:53.290 [2024-12-13 11:18:13.684201] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:53.290 [2024-12-13 11:18:13.684205] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:53.290 [2024-12-13 11:18:13.684209] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:53.290 [2024-12-13 11:18:13.684220] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183a00 00:23:53.290 [2024-12-13 11:18:13.684226] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.290 [2024-12-13 11:18:13.684231] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183a00 00:23:53.290 [2024-12-13 11:18:13.684236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:53.290 [2024-12-13 11:18:13.684247] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.290 [2024-12-13 11:18:13.684251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:53.290 [2024-12-13 11:18:13.684255] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183a00 00:23:53.290 [2024-12-13 11:18:13.684261] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183a00 00:23:53.290 [2024-12-13 11:18:13.684270] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:0 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.291 [2024-12-13 11:18:13.684276] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.291 [2024-12-13 11:18:13.684280] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:53.291 [2024-12-13 11:18:13.684284] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183a00 00:23:53.291 [2024-12-13 11:18:13.684293] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.291 [2024-12-13 11:18:13.684297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:53.291 [2024-12-13 11:18:13.684301] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183a00 00:23:53.291 [2024-12-13 11:18:13.684307] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183a00 00:23:53.291 [2024-12-13 11:18:13.684312] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:0 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.291 [2024-12-13 11:18:13.684325] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.291 [2024-12-13 11:18:13.684331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:53.291 [2024-12-13 11:18:13.684335] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183a00 00:23:53.291 [2024-12-13 11:18:13.684340] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183a00 00:23:53.291 [2024-12-13 11:18:13.684346] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.291 [2024-12-13 11:18:13.684362] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.291 [2024-12-13 11:18:13.684366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:23:53.291 [2024-12-13 11:18:13.684370] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183a00 00:23:53.291 [2024-12-13 11:18:13.684378] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183a00 00:23:53.291 [2024-12-13 11:18:13.684383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x183a00 00:23:53.291 [2024-12-13 11:18:13.684389] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183a00 00:23:53.291 [2024-12-13 11:18:13.684394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x183a00 00:23:53.291 [2024-12-13 11:18:13.684400] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b80 length 0x40 lkey 0x183a00 00:23:53.291 [2024-12-13 11:18:13.684405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x183a00 00:23:53.291 [2024-12-13 11:18:13.684412] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0cc0 length 
0x40 lkey 0x183a00 00:23:53.291 [2024-12-13 11:18:13.684417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x183a00 00:23:53.291 [2024-12-13 11:18:13.684423] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.291 [2024-12-13 11:18:13.684427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:53.291 [2024-12-13 11:18:13.684435] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183a00 00:23:53.291 [2024-12-13 11:18:13.684450] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.291 [2024-12-13 11:18:13.684454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:53.291 [2024-12-13 11:18:13.684461] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183a00 00:23:53.291 [2024-12-13 11:18:13.684465] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.291 [2024-12-13 11:18:13.684469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:53.291 [2024-12-13 11:18:13.684473] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183a00 00:23:53.291 [2024-12-13 11:18:13.684478] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.291 [2024-12-13 11:18:13.684481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:53.291 [2024-12-13 11:18:13.684488] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183a00 00:23:53.291 ===================================================== 00:23:53.291 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:23:53.291 ===================================================== 00:23:53.291 Controller Capabilities/Features 00:23:53.291 ================================ 00:23:53.291 Vendor ID: 8086 00:23:53.291 Subsystem Vendor ID: 8086 00:23:53.291 Serial Number: SPDK00000000000001 00:23:53.291 Model Number: SPDK bdev Controller 00:23:53.291 Firmware Version: 24.01.1 00:23:53.291 Recommended Arb Burst: 6 00:23:53.291 IEEE OUI Identifier: e4 d2 5c 00:23:53.291 Multi-path I/O 00:23:53.291 May have multiple subsystem ports: Yes 00:23:53.291 May have multiple controllers: Yes 00:23:53.291 Associated with SR-IOV VF: No 00:23:53.291 Max Data Transfer Size: 131072 00:23:53.291 Max Number of Namespaces: 32 00:23:53.291 Max Number of I/O Queues: 127 00:23:53.291 NVMe Specification Version (VS): 1.3 00:23:53.291 NVMe Specification Version (Identify): 1.3 00:23:53.291 Maximum Queue Entries: 128 00:23:53.291 Contiguous Queues Required: Yes 00:23:53.291 Arbitration Mechanisms Supported 00:23:53.291 Weighted Round Robin: Not Supported 00:23:53.291 Vendor Specific: Not Supported 00:23:53.291 Reset Timeout: 15000 ms 00:23:53.291 Doorbell Stride: 4 bytes 00:23:53.291 NVM Subsystem Reset: Not Supported 00:23:53.291 Command Sets Supported 00:23:53.291 NVM Command Set: Supported 00:23:53.291 Boot Partition: Not Supported 00:23:53.291 Memory Page Size Minimum: 4096 bytes 00:23:53.291 Memory Page Size Maximum: 4096 bytes 00:23:53.291 
Persistent Memory Region: Not Supported 00:23:53.291 Optional Asynchronous Events Supported 00:23:53.291 Namespace Attribute Notices: Supported 00:23:53.291 Firmware Activation Notices: Not Supported 00:23:53.291 ANA Change Notices: Not Supported 00:23:53.291 PLE Aggregate Log Change Notices: Not Supported 00:23:53.291 LBA Status Info Alert Notices: Not Supported 00:23:53.291 EGE Aggregate Log Change Notices: Not Supported 00:23:53.291 Normal NVM Subsystem Shutdown event: Not Supported 00:23:53.291 Zone Descriptor Change Notices: Not Supported 00:23:53.291 Discovery Log Change Notices: Not Supported 00:23:53.291 Controller Attributes 00:23:53.291 128-bit Host Identifier: Supported 00:23:53.291 Non-Operational Permissive Mode: Not Supported 00:23:53.291 NVM Sets: Not Supported 00:23:53.291 Read Recovery Levels: Not Supported 00:23:53.291 Endurance Groups: Not Supported 00:23:53.291 Predictable Latency Mode: Not Supported 00:23:53.291 Traffic Based Keep ALive: Not Supported 00:23:53.291 Namespace Granularity: Not Supported 00:23:53.291 SQ Associations: Not Supported 00:23:53.291 UUID List: Not Supported 00:23:53.291 Multi-Domain Subsystem: Not Supported 00:23:53.291 Fixed Capacity Management: Not Supported 00:23:53.291 Variable Capacity Management: Not Supported 00:23:53.291 Delete Endurance Group: Not Supported 00:23:53.291 Delete NVM Set: Not Supported 00:23:53.291 Extended LBA Formats Supported: Not Supported 00:23:53.291 Flexible Data Placement Supported: Not Supported 00:23:53.291 00:23:53.291 Controller Memory Buffer Support 00:23:53.291 ================================ 00:23:53.291 Supported: No 00:23:53.291 00:23:53.291 Persistent Memory Region Support 00:23:53.291 ================================ 00:23:53.291 Supported: No 00:23:53.291 00:23:53.291 Admin Command Set Attributes 00:23:53.291 ============================ 00:23:53.291 Security Send/Receive: Not Supported 00:23:53.291 Format NVM: Not Supported 00:23:53.291 Firmware Activate/Download: Not Supported 00:23:53.291 Namespace Management: Not Supported 00:23:53.291 Device Self-Test: Not Supported 00:23:53.291 Directives: Not Supported 00:23:53.291 NVMe-MI: Not Supported 00:23:53.291 Virtualization Management: Not Supported 00:23:53.291 Doorbell Buffer Config: Not Supported 00:23:53.291 Get LBA Status Capability: Not Supported 00:23:53.291 Command & Feature Lockdown Capability: Not Supported 00:23:53.291 Abort Command Limit: 4 00:23:53.291 Async Event Request Limit: 4 00:23:53.291 Number of Firmware Slots: N/A 00:23:53.291 Firmware Slot 1 Read-Only: N/A 00:23:53.291 Firmware Activation Without Reset: N/A 00:23:53.291 Multiple Update Detection Support: N/A 00:23:53.291 Firmware Update Granularity: No Information Provided 00:23:53.291 Per-Namespace SMART Log: No 00:23:53.291 Asymmetric Namespace Access Log Page: Not Supported 00:23:53.291 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:53.291 Command Effects Log Page: Supported 00:23:53.291 Get Log Page Extended Data: Supported 00:23:53.291 Telemetry Log Pages: Not Supported 00:23:53.291 Persistent Event Log Pages: Not Supported 00:23:53.291 Supported Log Pages Log Page: May Support 00:23:53.291 Commands Supported & Effects Log Page: Not Supported 00:23:53.291 Feature Identifiers & Effects Log Page:May Support 00:23:53.291 NVMe-MI Commands & Effects Log Page: May Support 00:23:53.291 Data Area 4 for Telemetry Log: Not Supported 00:23:53.291 Error Log Page Entries Supported: 128 00:23:53.291 Keep Alive: Supported 00:23:53.291 Keep Alive Granularity: 10000 ms 00:23:53.291 
00:23:53.291 NVM Command Set Attributes 00:23:53.291 ========================== 00:23:53.291 Submission Queue Entry Size 00:23:53.291 Max: 64 00:23:53.291 Min: 64 00:23:53.291 Completion Queue Entry Size 00:23:53.291 Max: 16 00:23:53.291 Min: 16 00:23:53.291 Number of Namespaces: 32 00:23:53.291 Compare Command: Supported 00:23:53.291 Write Uncorrectable Command: Not Supported 00:23:53.291 Dataset Management Command: Supported 00:23:53.291 Write Zeroes Command: Supported 00:23:53.292 Set Features Save Field: Not Supported 00:23:53.292 Reservations: Supported 00:23:53.292 Timestamp: Not Supported 00:23:53.292 Copy: Supported 00:23:53.292 Volatile Write Cache: Present 00:23:53.292 Atomic Write Unit (Normal): 1 00:23:53.292 Atomic Write Unit (PFail): 1 00:23:53.292 Atomic Compare & Write Unit: 1 00:23:53.292 Fused Compare & Write: Supported 00:23:53.292 Scatter-Gather List 00:23:53.292 SGL Command Set: Supported 00:23:53.292 SGL Keyed: Supported 00:23:53.292 SGL Bit Bucket Descriptor: Not Supported 00:23:53.292 SGL Metadata Pointer: Not Supported 00:23:53.292 Oversized SGL: Not Supported 00:23:53.292 SGL Metadata Address: Not Supported 00:23:53.292 SGL Offset: Supported 00:23:53.292 Transport SGL Data Block: Not Supported 00:23:53.292 Replay Protected Memory Block: Not Supported 00:23:53.292 00:23:53.292 Firmware Slot Information 00:23:53.292 ========================= 00:23:53.292 Active slot: 1 00:23:53.292 Slot 1 Firmware Revision: 24.01.1 00:23:53.292 00:23:53.292 00:23:53.292 Commands Supported and Effects 00:23:53.292 ============================== 00:23:53.292 Admin Commands 00:23:53.292 -------------- 00:23:53.292 Get Log Page (02h): Supported 00:23:53.292 Identify (06h): Supported 00:23:53.292 Abort (08h): Supported 00:23:53.292 Set Features (09h): Supported 00:23:53.292 Get Features (0Ah): Supported 00:23:53.292 Asynchronous Event Request (0Ch): Supported 00:23:53.292 Keep Alive (18h): Supported 00:23:53.292 I/O Commands 00:23:53.292 ------------ 00:23:53.292 Flush (00h): Supported LBA-Change 00:23:53.292 Write (01h): Supported LBA-Change 00:23:53.292 Read (02h): Supported 00:23:53.292 Compare (05h): Supported 00:23:53.292 Write Zeroes (08h): Supported LBA-Change 00:23:53.292 Dataset Management (09h): Supported LBA-Change 00:23:53.292 Copy (19h): Supported LBA-Change 00:23:53.292 Unknown (79h): Supported LBA-Change 00:23:53.292 Unknown (7Ah): Supported 00:23:53.292 00:23:53.292 Error Log 00:23:53.292 ========= 00:23:53.292 00:23:53.292 Arbitration 00:23:53.292 =========== 00:23:53.292 Arbitration Burst: 1 00:23:53.292 00:23:53.292 Power Management 00:23:53.292 ================ 00:23:53.292 Number of Power States: 1 00:23:53.292 Current Power State: Power State #0 00:23:53.292 Power State #0: 00:23:53.292 Max Power: 0.00 W 00:23:53.292 Non-Operational State: Operational 00:23:53.292 Entry Latency: Not Reported 00:23:53.292 Exit Latency: Not Reported 00:23:53.292 Relative Read Throughput: 0 00:23:53.292 Relative Read Latency: 0 00:23:53.292 Relative Write Throughput: 0 00:23:53.292 Relative Write Latency: 0 00:23:53.292 Idle Power: Not Reported 00:23:53.292 Active Power: Not Reported 00:23:53.292 Non-Operational Permissive Mode: Not Supported 00:23:53.292 00:23:53.292 Health Information 00:23:53.292 ================== 00:23:53.292 Critical Warnings: 00:23:53.292 Available Spare Space: OK 00:23:53.292 Temperature: OK 00:23:53.292 Device Reliability: OK 00:23:53.292 Read Only: No 00:23:53.292 Volatile Memory Backup: OK 00:23:53.292 Current Temperature: 0 Kelvin (-273 Celsius) 
00:23:53.292 Temperature Threshol[2024-12-13 11:18:13.684562] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0cc0 length 0x40 lkey 0x183a00 00:23:53.292 [2024-12-13 11:18:13.684569] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.292 [2024-12-13 11:18:13.684586] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.292 [2024-12-13 11:18:13.684590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:53.292 [2024-12-13 11:18:13.684594] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183a00 00:23:53.292 [2024-12-13 11:18:13.684614] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:53.292 [2024-12-13 11:18:13.684621] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 4715 doesn't match qid 00:23:53.292 [2024-12-13 11:18:13.684632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32660 cdw0:5 sqhd:9e28 p:0 m:0 dnr:0 00:23:53.292 [2024-12-13 11:18:13.684637] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 4715 doesn't match qid 00:23:53.292 [2024-12-13 11:18:13.684643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32660 cdw0:5 sqhd:9e28 p:0 m:0 dnr:0 00:23:53.292 [2024-12-13 11:18:13.684647] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 4715 doesn't match qid 00:23:53.292 [2024-12-13 11:18:13.684652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32660 cdw0:5 sqhd:9e28 p:0 m:0 dnr:0 00:23:53.292 [2024-12-13 11:18:13.684657] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 4715 doesn't match qid 00:23:53.292 [2024-12-13 11:18:13.684662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32660 cdw0:5 sqhd:9e28 p:0 m:0 dnr:0 00:23:53.292 [2024-12-13 11:18:13.684668] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x183a00 00:23:53.292 [2024-12-13 11:18:13.684674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.292 [2024-12-13 11:18:13.684693] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.292 [2024-12-13 11:18:13.684697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:23:53.292 [2024-12-13 11:18:13.684703] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.292 [2024-12-13 11:18:13.684708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.292 [2024-12-13 11:18:13.684712] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183a00 00:23:53.292 [2024-12-13 11:18:13.684733] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.292 [2024-12-13 11:18:13.684737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:53.292 [2024-12-13 
11:18:13.684740] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:53.292 [2024-12-13 11:18:13.684744] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:53.292 [2024-12-13 11:18:13.684749] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183a00 00:23:53.292 [2024-12-13 11:18:13.684755] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.292 [2024-12-13 11:18:13.684761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.292 [2024-12-13 11:18:13.684777] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.292 [2024-12-13 11:18:13.684781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:23:53.292 [2024-12-13 11:18:13.684788] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183a00 00:23:53.292 [2024-12-13 11:18:13.684795] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.292 [2024-12-13 11:18:13.684800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.292 [2024-12-13 11:18:13.684819] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.292 [2024-12-13 11:18:13.684823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:23:53.292 [2024-12-13 11:18:13.684828] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183a00 00:23:53.292 [2024-12-13 11:18:13.684835] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.292 [2024-12-13 11:18:13.684841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.292 [2024-12-13 11:18:13.684863] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.292 [2024-12-13 11:18:13.684867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:23:53.292 [2024-12-13 11:18:13.684871] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183a00 00:23:53.292 [2024-12-13 11:18:13.684877] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.292 [2024-12-13 11:18:13.684882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.292 [2024-12-13 11:18:13.684905] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.292 [2024-12-13 11:18:13.684910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:23:53.292 [2024-12-13 11:18:13.684914] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183a00 00:23:53.292 [2024-12-13 11:18:13.684920] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: 
*DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.292 [2024-12-13 11:18:13.684925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.292 [2024-12-13 11:18:13.684943] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.292 [2024-12-13 11:18:13.684948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:23:53.292 [2024-12-13 11:18:13.684952] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183a00 00:23:53.292 [2024-12-13 11:18:13.684959] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.292 [2024-12-13 11:18:13.684965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.292 [2024-12-13 11:18:13.684981] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.292 [2024-12-13 11:18:13.684985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:23:53.292 [2024-12-13 11:18:13.684990] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183a00 00:23:53.292 [2024-12-13 11:18:13.684996] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.292 [2024-12-13 11:18:13.685003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.292 [2024-12-13 11:18:13.685018] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.292 [2024-12-13 11:18:13.685022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:23:53.293 [2024-12-13 11:18:13.685028] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183a00 00:23:53.293 [2024-12-13 11:18:13.685034] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.293 [2024-12-13 11:18:13.685039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.293 [2024-12-13 11:18:13.685052] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.293 [2024-12-13 11:18:13.685056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:23:53.293 [2024-12-13 11:18:13.685060] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183a00 00:23:53.293 [2024-12-13 11:18:13.685067] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.293 [2024-12-13 11:18:13.685072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.293 [2024-12-13 11:18:13.685093] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.293 [2024-12-13 11:18:13.685097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 
dnr:0 00:23:53.293 [2024-12-13 11:18:13.685102] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183a00 00:23:53.293 [2024-12-13 11:18:13.685108] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.293 [2024-12-13 11:18:13.685113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.293 [2024-12-13 11:18:13.685135] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.293 [2024-12-13 11:18:13.685139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:23:53.293 [2024-12-13 11:18:13.685143] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183a00 00:23:53.293 [2024-12-13 11:18:13.685149] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.293 [2024-12-13 11:18:13.685154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.293 [2024-12-13 11:18:13.685169] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.293 [2024-12-13 11:18:13.685173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:23:53.293 [2024-12-13 11:18:13.685176] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183a00 00:23:53.293 [2024-12-13 11:18:13.685183] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.293 [2024-12-13 11:18:13.685188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.293 [2024-12-13 11:18:13.685210] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.293 [2024-12-13 11:18:13.685213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:23:53.293 [2024-12-13 11:18:13.685217] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183a00 00:23:53.293 [2024-12-13 11:18:13.685224] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.293 [2024-12-13 11:18:13.685229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.293 [2024-12-13 11:18:13.685250] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.293 [2024-12-13 11:18:13.685256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:23:53.293 [2024-12-13 11:18:13.685260] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183a00 00:23:53.293 [2024-12-13 11:18:13.685270] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.293 [2024-12-13 11:18:13.685276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.293 [2024-12-13 
11:18:13.685291] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.293 [2024-12-13 11:18:13.685295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:23:53.293 [2024-12-13 11:18:13.685299] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183a00 00:23:53.293 [2024-12-13 11:18:13.685306] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.293 [2024-12-13 11:18:13.685311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.293 [2024-12-13 11:18:13.685327] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.293 [2024-12-13 11:18:13.685331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:23:53.293 [2024-12-13 11:18:13.685335] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183a00 00:23:53.293 [2024-12-13 11:18:13.685341] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.293 [2024-12-13 11:18:13.685346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.293 [2024-12-13 11:18:13.685362] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.293 [2024-12-13 11:18:13.685366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:23:53.293 [2024-12-13 11:18:13.685370] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183a00 00:23:53.293 [2024-12-13 11:18:13.685376] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.293 [2024-12-13 11:18:13.685381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.293 [2024-12-13 11:18:13.685400] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.293 [2024-12-13 11:18:13.685404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:23:53.293 [2024-12-13 11:18:13.685408] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183a00 00:23:53.293 [2024-12-13 11:18:13.685414] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.293 [2024-12-13 11:18:13.685420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.293 [2024-12-13 11:18:13.685441] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.293 [2024-12-13 11:18:13.685445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:23:53.293 [2024-12-13 11:18:13.685449] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183a00 00:23:53.293 [2024-12-13 11:18:13.685455] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 
0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.293 [2024-12-13 11:18:13.685460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.293 [2024-12-13 11:18:13.685485] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.293 [2024-12-13 11:18:13.685492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:23:53.293 [2024-12-13 11:18:13.685496] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183a00 00:23:53.293 [2024-12-13 11:18:13.685502] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.293 [2024-12-13 11:18:13.685507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.293 [2024-12-13 11:18:13.685523] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.293 [2024-12-13 11:18:13.685527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:23:53.293 [2024-12-13 11:18:13.685531] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183a00 00:23:53.293 [2024-12-13 11:18:13.685537] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.293 [2024-12-13 11:18:13.685542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.293 [2024-12-13 11:18:13.685559] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.293 [2024-12-13 11:18:13.685563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:23:53.293 [2024-12-13 11:18:13.685567] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183a00 00:23:53.293 [2024-12-13 11:18:13.685574] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.293 [2024-12-13 11:18:13.685579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.293 [2024-12-13 11:18:13.685595] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.293 [2024-12-13 11:18:13.685598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:23:53.293 [2024-12-13 11:18:13.685602] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183a00 00:23:53.293 [2024-12-13 11:18:13.685609] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.293 [2024-12-13 11:18:13.685614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.294 [2024-12-13 11:18:13.685628] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.294 [2024-12-13 11:18:13.685632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:23:53.294 
[2024-12-13 11:18:13.685636] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183a00 00:23:53.294 [2024-12-13 11:18:13.685642] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.294 [2024-12-13 11:18:13.685647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.294 [2024-12-13 11:18:13.685662] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.294 [2024-12-13 11:18:13.685666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:23:53.294 [2024-12-13 11:18:13.685670] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183a00 00:23:53.294 [2024-12-13 11:18:13.685676] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.294 [2024-12-13 11:18:13.685681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.294 [2024-12-13 11:18:13.685694] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.294 [2024-12-13 11:18:13.685698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:23:53.294 [2024-12-13 11:18:13.685701] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183a00 00:23:53.294 [2024-12-13 11:18:13.685707] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.294 [2024-12-13 11:18:13.685713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.294 [2024-12-13 11:18:13.685727] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.294 [2024-12-13 11:18:13.685731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:23:53.294 [2024-12-13 11:18:13.685735] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183a00 00:23:53.294 [2024-12-13 11:18:13.685741] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.294 [2024-12-13 11:18:13.685747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.294 [2024-12-13 11:18:13.685761] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.294 [2024-12-13 11:18:13.685765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:23:53.294 [2024-12-13 11:18:13.685769] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183a00 00:23:53.294 [2024-12-13 11:18:13.685775] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.294 [2024-12-13 11:18:13.685780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.294 [2024-12-13 11:18:13.685799] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.294 [2024-12-13 11:18:13.685802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:23:53.294 [2024-12-13 11:18:13.685807] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183a00 00:23:53.294 [2024-12-13 11:18:13.685813] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.294 [2024-12-13 11:18:13.685818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.294 [2024-12-13 11:18:13.685834] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.294 [2024-12-13 11:18:13.685837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:23:53.294 [2024-12-13 11:18:13.685841] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183a00 00:23:53.294 [2024-12-13 11:18:13.685848] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.294 [2024-12-13 11:18:13.685853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.294 [2024-12-13 11:18:13.685870] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.294 [2024-12-13 11:18:13.685874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:23:53.294 [2024-12-13 11:18:13.685878] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183a00 00:23:53.294 [2024-12-13 11:18:13.685884] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.294 [2024-12-13 11:18:13.685889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.294 [2024-12-13 11:18:13.685905] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.294 [2024-12-13 11:18:13.685909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:23:53.294 [2024-12-13 11:18:13.685913] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183a00 00:23:53.294 [2024-12-13 11:18:13.685919] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.294 [2024-12-13 11:18:13.685924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.294 [2024-12-13 11:18:13.685940] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.294 [2024-12-13 11:18:13.685944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:23:53.294 [2024-12-13 11:18:13.685948] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183a00 00:23:53.294 [2024-12-13 11:18:13.685954] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x183a00 00:23:53.294 [2024-12-13 11:18:13.685959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.294 [2024-12-13 11:18:13.685976] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.294 [2024-12-13 11:18:13.685980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:23:53.294 [2024-12-13 11:18:13.685984] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183a00 00:23:53.294 [2024-12-13 11:18:13.685990] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.294 [2024-12-13 11:18:13.685996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.294 [2024-12-13 11:18:13.686019] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.294 [2024-12-13 11:18:13.686022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:23:53.294 [2024-12-13 11:18:13.686027] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183a00 00:23:53.294 [2024-12-13 11:18:13.686033] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.294 [2024-12-13 11:18:13.686038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.294 [2024-12-13 11:18:13.686061] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.294 [2024-12-13 11:18:13.686065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:23:53.294 [2024-12-13 11:18:13.686069] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183a00 00:23:53.294 [2024-12-13 11:18:13.686075] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.294 [2024-12-13 11:18:13.686080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.294 [2024-12-13 11:18:13.686101] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.294 [2024-12-13 11:18:13.686105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:23:53.294 [2024-12-13 11:18:13.686109] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183a00 00:23:53.294 [2024-12-13 11:18:13.686115] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.294 [2024-12-13 11:18:13.686120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.294 [2024-12-13 11:18:13.686133] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.294 [2024-12-13 11:18:13.686136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:23:53.294 [2024-12-13 
11:18:13.686140] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183a00 00:23:53.294 [2024-12-13 11:18:13.686147] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.294 [2024-12-13 11:18:13.686152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.294 [2024-12-13 11:18:13.686172] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.294 [2024-12-13 11:18:13.686176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:23:53.294 [2024-12-13 11:18:13.686180] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183a00 00:23:53.294 [2024-12-13 11:18:13.686186] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.294 [2024-12-13 11:18:13.686191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.294 [2024-12-13 11:18:13.686207] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.294 [2024-12-13 11:18:13.686211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:23:53.294 [2024-12-13 11:18:13.686215] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183a00 00:23:53.294 [2024-12-13 11:18:13.686221] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.294 [2024-12-13 11:18:13.686226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.294 [2024-12-13 11:18:13.686243] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.294 [2024-12-13 11:18:13.686247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:23:53.294 [2024-12-13 11:18:13.686251] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183a00 00:23:53.294 [2024-12-13 11:18:13.686257] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.294 [2024-12-13 11:18:13.686263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.294 [2024-12-13 11:18:13.686285] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.294 [2024-12-13 11:18:13.686289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:23:53.294 [2024-12-13 11:18:13.686293] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686300] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.295 [2024-12-13 11:18:13.686317] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.295 [2024-12-13 11:18:13.686321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:23:53.295 [2024-12-13 11:18:13.686325] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686332] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.295 [2024-12-13 11:18:13.686358] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.295 [2024-12-13 11:18:13.686361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:23:53.295 [2024-12-13 11:18:13.686366] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686372] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.295 [2024-12-13 11:18:13.686391] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.295 [2024-12-13 11:18:13.686395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:23:53.295 [2024-12-13 11:18:13.686399] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686405] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.295 [2024-12-13 11:18:13.686433] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.295 [2024-12-13 11:18:13.686437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:23:53.295 [2024-12-13 11:18:13.686441] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686447] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.295 [2024-12-13 11:18:13.686467] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.295 [2024-12-13 11:18:13.686471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:23:53.295 [2024-12-13 11:18:13.686474] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686481] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.295 [2024-12-13 11:18:13.686500] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.295 [2024-12-13 11:18:13.686504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:23:53.295 [2024-12-13 11:18:13.686508] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686514] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.295 [2024-12-13 11:18:13.686535] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.295 [2024-12-13 11:18:13.686539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:23:53.295 [2024-12-13 11:18:13.686543] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686549] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.295 [2024-12-13 11:18:13.686575] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.295 [2024-12-13 11:18:13.686578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:23:53.295 [2024-12-13 11:18:13.686583] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686589] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.295 [2024-12-13 11:18:13.686612] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.295 [2024-12-13 11:18:13.686616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:23:53.295 [2024-12-13 11:18:13.686620] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686626] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.295 [2024-12-13 11:18:13.686655] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.295 [2024-12-13 11:18:13.686659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:23:53.295 [2024-12-13 
11:18:13.686662] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686668] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.295 [2024-12-13 11:18:13.686691] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.295 [2024-12-13 11:18:13.686695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:23:53.295 [2024-12-13 11:18:13.686699] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686705] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.295 [2024-12-13 11:18:13.686729] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.295 [2024-12-13 11:18:13.686733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:23:53.295 [2024-12-13 11:18:13.686737] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686743] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.295 [2024-12-13 11:18:13.686761] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.295 [2024-12-13 11:18:13.686765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:23:53.295 [2024-12-13 11:18:13.686769] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686777] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.295 [2024-12-13 11:18:13.686796] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.295 [2024-12-13 11:18:13.686800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:23:53.295 [2024-12-13 11:18:13.686804] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686810] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.295 [2024-12-13 11:18:13.686830] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.295 [2024-12-13 11:18:13.686834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:23:53.295 [2024-12-13 11:18:13.686838] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686844] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.295 [2024-12-13 11:18:13.686865] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.295 [2024-12-13 11:18:13.686869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:23:53.295 [2024-12-13 11:18:13.686873] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686879] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.295 [2024-12-13 11:18:13.686901] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.295 [2024-12-13 11:18:13.686905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:23:53.295 [2024-12-13 11:18:13.686909] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686915] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.295 [2024-12-13 11:18:13.686941] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.295 [2024-12-13 11:18:13.686945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:23:53.295 [2024-12-13 11:18:13.686948] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686955] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.295 [2024-12-13 11:18:13.686960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.295 [2024-12-13 11:18:13.686974] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.296 [2024-12-13 11:18:13.686978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:23:53.296 [2024-12-13 11:18:13.686982] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183a00 00:23:53.296 [2024-12-13 11:18:13.686989] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x183a00 00:23:53.296 [2024-12-13 11:18:13.686995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.296 [2024-12-13 11:18:13.687018] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.296 [2024-12-13 11:18:13.687022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:23:53.296 [2024-12-13 11:18:13.687026] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183a00 00:23:53.296 [2024-12-13 11:18:13.687032] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.296 [2024-12-13 11:18:13.687037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.296 [2024-12-13 11:18:13.687059] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.296 [2024-12-13 11:18:13.687063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:23:53.296 [2024-12-13 11:18:13.687067] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183a00 00:23:53.296 [2024-12-13 11:18:13.687073] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.296 [2024-12-13 11:18:13.687078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.296 [2024-12-13 11:18:13.687091] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.296 [2024-12-13 11:18:13.687095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:23:53.296 [2024-12-13 11:18:13.687099] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183a00 00:23:53.296 [2024-12-13 11:18:13.687105] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.296 [2024-12-13 11:18:13.687110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.296 [2024-12-13 11:18:13.687126] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.296 [2024-12-13 11:18:13.687130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:23:53.296 [2024-12-13 11:18:13.687134] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183a00 00:23:53.296 [2024-12-13 11:18:13.687140] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.296 [2024-12-13 11:18:13.687145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.296 [2024-12-13 11:18:13.687167] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.296 [2024-12-13 11:18:13.687171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:23:53.296 [2024-12-13 
11:18:13.687175] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183a00 00:23:53.296 [2024-12-13 11:18:13.687181] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.296 [2024-12-13 11:18:13.687186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.296 [2024-12-13 11:18:13.687207] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.296 [2024-12-13 11:18:13.687211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:23:53.296 [2024-12-13 11:18:13.687216] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183a00 00:23:53.296 [2024-12-13 11:18:13.687222] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.296 [2024-12-13 11:18:13.687227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.296 [2024-12-13 11:18:13.687249] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.296 [2024-12-13 11:18:13.687253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:23:53.296 [2024-12-13 11:18:13.687257] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183a00 00:23:53.296 [2024-12-13 11:18:13.687263] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183a00 00:23:53.296 [2024-12-13 11:18:13.691276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:53.296 [2024-12-13 11:18:13.691293] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:53.296 [2024-12-13 11:18:13.691297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:001e p:0 m:0 dnr:0 00:23:53.296 [2024-12-13 11:18:13.691301] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183a00 00:23:53.296 [2024-12-13 11:18:13.691306] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:23:53.296 d: 0 Kelvin (-273 Celsius) 00:23:53.296 Available Spare: 0% 00:23:53.296 Available Spare Threshold: 0% 00:23:53.296 Life Percentage Used: 0% 00:23:53.296 Data Units Read: 0 00:23:53.296 Data Units Written: 0 00:23:53.296 Host Read Commands: 0 00:23:53.296 Host Write Commands: 0 00:23:53.296 Controller Busy Time: 0 minutes 00:23:53.296 Power Cycles: 0 00:23:53.296 Power On Hours: 0 hours 00:23:53.296 Unsafe Shutdowns: 0 00:23:53.296 Unrecoverable Media Errors: 0 00:23:53.296 Lifetime Error Log Entries: 0 00:23:53.296 Warning Temperature Time: 0 minutes 00:23:53.296 Critical Temperature Time: 0 minutes 00:23:53.296 00:23:53.296 Number of Queues 00:23:53.296 ================ 00:23:53.296 Number of I/O Submission Queues: 127 00:23:53.296 Number of I/O Completion Queues: 127 00:23:53.296 00:23:53.296 Active Namespaces 00:23:53.296 ================= 00:23:53.296 Namespace ID:1 00:23:53.296 Error Recovery Timeout: Unlimited 00:23:53.296 Command Set Identifier: NVM (00h) 
00:23:53.296 Deallocate: Supported 00:23:53.296 Deallocated/Unwritten Error: Not Supported 00:23:53.296 Deallocated Read Value: Unknown 00:23:53.296 Deallocate in Write Zeroes: Not Supported 00:23:53.296 Deallocated Guard Field: 0xFFFF 00:23:53.296 Flush: Supported 00:23:53.296 Reservation: Supported 00:23:53.296 Namespace Sharing Capabilities: Multiple Controllers 00:23:53.296 Size (in LBAs): 131072 (0GiB) 00:23:53.296 Capacity (in LBAs): 131072 (0GiB) 00:23:53.296 Utilization (in LBAs): 131072 (0GiB) 00:23:53.296 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:53.296 EUI64: ABCDEF0123456789 00:23:53.296 UUID: 0a3cf8a6-d323-455c-978b-cf8e37397ddc 00:23:53.296 Thin Provisioning: Not Supported 00:23:53.296 Per-NS Atomic Units: Yes 00:23:53.296 Atomic Boundary Size (Normal): 0 00:23:53.296 Atomic Boundary Size (PFail): 0 00:23:53.296 Atomic Boundary Offset: 0 00:23:53.296 Maximum Single Source Range Length: 65535 00:23:53.296 Maximum Copy Length: 65535 00:23:53.296 Maximum Source Range Count: 1 00:23:53.296 NGUID/EUI64 Never Reused: No 00:23:53.296 Namespace Write Protected: No 00:23:53.296 Number of LBA Formats: 1 00:23:53.296 Current LBA Format: LBA Format #00 00:23:53.296 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:53.296 00:23:53.296 11:18:13 -- host/identify.sh@51 -- # sync 00:23:53.296 11:18:13 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:53.296 11:18:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.296 11:18:13 -- common/autotest_common.sh@10 -- # set +x 00:23:53.296 11:18:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.296 11:18:13 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:53.296 11:18:13 -- host/identify.sh@56 -- # nvmftestfini 00:23:53.296 11:18:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:53.296 11:18:13 -- nvmf/common.sh@116 -- # sync 00:23:53.296 11:18:13 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:23:53.296 11:18:13 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:23:53.296 11:18:13 -- nvmf/common.sh@119 -- # set +e 00:23:53.296 11:18:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:53.296 11:18:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:23:53.296 rmmod nvme_rdma 00:23:53.296 rmmod nvme_fabrics 00:23:53.296 11:18:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:53.296 11:18:13 -- nvmf/common.sh@123 -- # set -e 00:23:53.296 11:18:13 -- nvmf/common.sh@124 -- # return 0 00:23:53.296 11:18:13 -- nvmf/common.sh@477 -- # '[' -n 1722959 ']' 00:23:53.296 11:18:13 -- nvmf/common.sh@478 -- # killprocess 1722959 00:23:53.296 11:18:13 -- common/autotest_common.sh@936 -- # '[' -z 1722959 ']' 00:23:53.296 11:18:13 -- common/autotest_common.sh@940 -- # kill -0 1722959 00:23:53.296 11:18:13 -- common/autotest_common.sh@941 -- # uname 00:23:53.296 11:18:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:53.296 11:18:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1722959 00:23:53.554 11:18:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:53.554 11:18:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:53.554 11:18:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1722959' 00:23:53.554 killing process with pid 1722959 00:23:53.554 11:18:13 -- common/autotest_common.sh@955 -- # kill 1722959 00:23:53.554 [2024-12-13 11:18:13.856323] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor 
of trtype' scheduled for removal in v24.05 hit 1 times 00:23:53.554 11:18:13 -- common/autotest_common.sh@960 -- # wait 1722959 00:23:53.812 11:18:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:53.812 11:18:14 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:23:53.812 00:23:53.812 real 0m7.608s 00:23:53.812 user 0m8.004s 00:23:53.812 sys 0m4.640s 00:23:53.812 11:18:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:53.812 11:18:14 -- common/autotest_common.sh@10 -- # set +x 00:23:53.812 ************************************ 00:23:53.812 END TEST nvmf_identify 00:23:53.812 ************************************ 00:23:53.812 11:18:14 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:23:53.812 11:18:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:53.812 11:18:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:53.812 11:18:14 -- common/autotest_common.sh@10 -- # set +x 00:23:53.812 ************************************ 00:23:53.812 START TEST nvmf_perf 00:23:53.812 ************************************ 00:23:53.812 11:18:14 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:23:53.812 * Looking for test storage... 00:23:53.812 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:53.812 11:18:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:53.812 11:18:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:53.812 11:18:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:53.812 11:18:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:53.813 11:18:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:53.813 11:18:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:53.813 11:18:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:53.813 11:18:14 -- scripts/common.sh@335 -- # IFS=.-: 00:23:53.813 11:18:14 -- scripts/common.sh@335 -- # read -ra ver1 00:23:53.813 11:18:14 -- scripts/common.sh@336 -- # IFS=.-: 00:23:53.813 11:18:14 -- scripts/common.sh@336 -- # read -ra ver2 00:23:53.813 11:18:14 -- scripts/common.sh@337 -- # local 'op=<' 00:23:53.813 11:18:14 -- scripts/common.sh@339 -- # ver1_l=2 00:23:53.813 11:18:14 -- scripts/common.sh@340 -- # ver2_l=1 00:23:53.813 11:18:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:53.813 11:18:14 -- scripts/common.sh@343 -- # case "$op" in 00:23:53.813 11:18:14 -- scripts/common.sh@344 -- # : 1 00:23:53.813 11:18:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:53.813 11:18:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:53.813 11:18:14 -- scripts/common.sh@364 -- # decimal 1 00:23:53.813 11:18:14 -- scripts/common.sh@352 -- # local d=1 00:23:53.813 11:18:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:53.813 11:18:14 -- scripts/common.sh@354 -- # echo 1 00:23:53.813 11:18:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:53.813 11:18:14 -- scripts/common.sh@365 -- # decimal 2 00:23:53.813 11:18:14 -- scripts/common.sh@352 -- # local d=2 00:23:53.813 11:18:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:53.813 11:18:14 -- scripts/common.sh@354 -- # echo 2 00:23:53.813 11:18:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:53.813 11:18:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:53.813 11:18:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:53.813 11:18:14 -- scripts/common.sh@367 -- # return 0 00:23:53.813 11:18:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:53.813 11:18:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:53.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.813 --rc genhtml_branch_coverage=1 00:23:53.813 --rc genhtml_function_coverage=1 00:23:53.813 --rc genhtml_legend=1 00:23:53.813 --rc geninfo_all_blocks=1 00:23:53.813 --rc geninfo_unexecuted_blocks=1 00:23:53.813 00:23:53.813 ' 00:23:53.813 11:18:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:53.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.813 --rc genhtml_branch_coverage=1 00:23:53.813 --rc genhtml_function_coverage=1 00:23:53.813 --rc genhtml_legend=1 00:23:53.813 --rc geninfo_all_blocks=1 00:23:53.813 --rc geninfo_unexecuted_blocks=1 00:23:53.813 00:23:53.813 ' 00:23:53.813 11:18:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:53.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.813 --rc genhtml_branch_coverage=1 00:23:53.813 --rc genhtml_function_coverage=1 00:23:53.813 --rc genhtml_legend=1 00:23:53.813 --rc geninfo_all_blocks=1 00:23:53.813 --rc geninfo_unexecuted_blocks=1 00:23:53.813 00:23:53.813 ' 00:23:53.813 11:18:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:53.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.813 --rc genhtml_branch_coverage=1 00:23:53.813 --rc genhtml_function_coverage=1 00:23:53.813 --rc genhtml_legend=1 00:23:53.813 --rc geninfo_all_blocks=1 00:23:53.813 --rc geninfo_unexecuted_blocks=1 00:23:53.813 00:23:53.813 ' 00:23:53.813 11:18:14 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:53.813 11:18:14 -- nvmf/common.sh@7 -- # uname -s 00:23:53.813 11:18:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:53.813 11:18:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:53.813 11:18:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:53.813 11:18:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:53.813 11:18:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:53.813 11:18:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:53.813 11:18:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:53.813 11:18:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:53.813 11:18:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:53.813 11:18:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:53.813 11:18:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 
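For readers reproducing this step by hand: the host-identity variables that nvmf/common.sh derives in the log above can be approximated with a few shell commands. This is a hedged sketch, not part of the test suite; the variable names mirror the log, the UUID extraction is illustrative, and nvme-cli must be installed for gen-hostnqn to work.

# Sketch: approximate the NVMe-oF host identity setup performed by nvmf/common.sh
NVMF_PORT=4420
NVMF_IP_PREFIX=192.168.100
NVME_HOSTNQN=$(nvme gen-hostnqn)                 # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}              # keep only the UUID portion (illustrative)
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
echo "host NQN: $NVME_HOSTNQN (host ID: $NVME_HOSTID)"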
00:23:53.813 11:18:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:23:53.813 11:18:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:53.813 11:18:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:53.813 11:18:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:53.813 11:18:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:53.813 11:18:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:53.813 11:18:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:53.813 11:18:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:53.813 11:18:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.813 11:18:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.813 11:18:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.813 11:18:14 -- paths/export.sh@5 -- # export PATH 00:23:53.813 11:18:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.813 11:18:14 -- nvmf/common.sh@46 -- # : 0 00:23:53.813 11:18:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:53.813 11:18:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:53.813 11:18:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:53.813 11:18:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:53.813 11:18:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:53.813 11:18:14 -- 
nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:53.813 11:18:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:53.813 11:18:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:53.813 11:18:14 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:53.813 11:18:14 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:53.813 11:18:14 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:23:53.813 11:18:14 -- host/perf.sh@17 -- # nvmftestinit 00:23:53.813 11:18:14 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:23:53.813 11:18:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:53.813 11:18:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:53.813 11:18:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:53.813 11:18:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:53.813 11:18:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.813 11:18:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:53.813 11:18:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.813 11:18:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:53.813 11:18:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:53.813 11:18:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:53.813 11:18:14 -- common/autotest_common.sh@10 -- # set +x 00:24:00.371 11:18:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:00.371 11:18:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:00.371 11:18:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:00.371 11:18:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:00.371 11:18:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:00.371 11:18:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:00.371 11:18:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:00.371 11:18:19 -- nvmf/common.sh@294 -- # net_devs=() 00:24:00.371 11:18:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:00.371 11:18:19 -- nvmf/common.sh@295 -- # e810=() 00:24:00.371 11:18:19 -- nvmf/common.sh@295 -- # local -ga e810 00:24:00.371 11:18:19 -- nvmf/common.sh@296 -- # x722=() 00:24:00.371 11:18:19 -- nvmf/common.sh@296 -- # local -ga x722 00:24:00.371 11:18:19 -- nvmf/common.sh@297 -- # mlx=() 00:24:00.371 11:18:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:00.371 11:18:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:00.371 11:18:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:00.371 11:18:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:00.371 11:18:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:00.371 11:18:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:00.371 11:18:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:00.371 11:18:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:00.371 11:18:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:00.371 11:18:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:00.371 11:18:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:00.371 11:18:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:00.371 11:18:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:00.371 11:18:19 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:00.371 11:18:19 -- nvmf/common.sh@321 -- # 
pci_devs+=("${x722[@]}") 00:24:00.371 11:18:19 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:00.371 11:18:19 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:00.371 11:18:19 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:00.371 11:18:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:00.372 11:18:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:00.372 11:18:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:24:00.372 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:24:00.372 11:18:19 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:00.372 11:18:19 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:00.372 11:18:19 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:00.372 11:18:19 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:00.372 11:18:19 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:00.372 11:18:19 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:00.372 11:18:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:00.372 11:18:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:24:00.372 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:24:00.372 11:18:19 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:00.372 11:18:19 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:00.372 11:18:19 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:00.372 11:18:19 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:00.372 11:18:19 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:00.372 11:18:19 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:00.372 11:18:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:00.372 11:18:19 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:00.372 11:18:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:00.372 11:18:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.372 11:18:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:00.372 11:18:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.372 11:18:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:24:00.372 Found net devices under 0000:18:00.0: mlx_0_0 00:24:00.372 11:18:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.372 11:18:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:00.372 11:18:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.372 11:18:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:00.372 11:18:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.372 11:18:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:24:00.372 Found net devices under 0000:18:00.1: mlx_0_1 00:24:00.372 11:18:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.372 11:18:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:00.372 11:18:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:00.372 11:18:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:00.372 11:18:19 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:00.372 11:18:19 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:00.372 11:18:19 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:00.372 11:18:19 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:00.372 11:18:19 -- nvmf/common.sh@57 -- # uname 00:24:00.372 11:18:19 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:00.372 11:18:19 -- nvmf/common.sh@61 
-- # modprobe ib_cm 00:24:00.372 11:18:19 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:00.372 11:18:19 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:00.372 11:18:19 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:00.372 11:18:19 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:00.372 11:18:19 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:00.372 11:18:19 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:00.372 11:18:19 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:00.372 11:18:19 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:00.372 11:18:19 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:00.372 11:18:19 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:00.372 11:18:19 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:00.372 11:18:19 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:00.372 11:18:19 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:00.372 11:18:19 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:00.372 11:18:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:00.372 11:18:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:00.372 11:18:19 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:00.372 11:18:19 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:00.372 11:18:19 -- nvmf/common.sh@104 -- # continue 2 00:24:00.372 11:18:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:00.372 11:18:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:00.372 11:18:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:00.372 11:18:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:00.372 11:18:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:00.372 11:18:19 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:00.372 11:18:19 -- nvmf/common.sh@104 -- # continue 2 00:24:00.372 11:18:19 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:00.372 11:18:19 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:00.372 11:18:19 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:00.372 11:18:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:00.372 11:18:19 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:00.372 11:18:19 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:00.372 11:18:19 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:00.372 11:18:19 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:00.372 11:18:19 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:00.372 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:00.372 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:24:00.372 altname enp24s0f0np0 00:24:00.372 altname ens785f0np0 00:24:00.372 inet 192.168.100.8/24 scope global mlx_0_0 00:24:00.372 valid_lft forever preferred_lft forever 00:24:00.372 11:18:19 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:00.372 11:18:19 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:00.372 11:18:19 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:00.372 11:18:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:00.372 11:18:19 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:00.372 11:18:19 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:00.372 11:18:19 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:00.372 11:18:19 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:00.372 11:18:19 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:00.372 3: mlx_0_1: 
mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:00.372 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:24:00.372 altname enp24s0f1np1 00:24:00.372 altname ens785f1np1 00:24:00.372 inet 192.168.100.9/24 scope global mlx_0_1 00:24:00.372 valid_lft forever preferred_lft forever 00:24:00.372 11:18:19 -- nvmf/common.sh@410 -- # return 0 00:24:00.372 11:18:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:00.372 11:18:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:00.372 11:18:19 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:00.372 11:18:19 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:00.372 11:18:19 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:00.372 11:18:19 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:00.372 11:18:19 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:00.372 11:18:19 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:00.372 11:18:19 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:00.372 11:18:19 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:00.372 11:18:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:00.372 11:18:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:00.372 11:18:19 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:00.372 11:18:19 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:00.372 11:18:19 -- nvmf/common.sh@104 -- # continue 2 00:24:00.372 11:18:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:00.372 11:18:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:00.372 11:18:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:00.372 11:18:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:00.372 11:18:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:00.372 11:18:19 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:00.372 11:18:19 -- nvmf/common.sh@104 -- # continue 2 00:24:00.372 11:18:19 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:00.372 11:18:19 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:00.372 11:18:19 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:00.372 11:18:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:00.372 11:18:19 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:00.372 11:18:19 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:00.372 11:18:20 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:00.372 11:18:20 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:00.372 11:18:20 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:00.372 11:18:20 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:00.372 11:18:20 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:00.372 11:18:20 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:00.372 11:18:20 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:00.372 192.168.100.9' 00:24:00.372 11:18:20 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:00.372 192.168.100.9' 00:24:00.372 11:18:20 -- nvmf/common.sh@445 -- # head -n 1 00:24:00.372 11:18:20 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:00.372 11:18:20 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:00.372 192.168.100.9' 00:24:00.372 11:18:20 -- nvmf/common.sh@446 -- # tail -n +2 00:24:00.372 11:18:20 -- nvmf/common.sh@446 -- # head -n 1 00:24:00.372 11:18:20 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:00.372 11:18:20 -- 
nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:00.372 11:18:20 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:00.372 11:18:20 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:00.372 11:18:20 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:00.372 11:18:20 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:00.372 11:18:20 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:00.372 11:18:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:00.372 11:18:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:00.372 11:18:20 -- common/autotest_common.sh@10 -- # set +x 00:24:00.372 11:18:20 -- nvmf/common.sh@469 -- # nvmfpid=1726621 00:24:00.372 11:18:20 -- nvmf/common.sh@470 -- # waitforlisten 1726621 00:24:00.372 11:18:20 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:00.372 11:18:20 -- common/autotest_common.sh@829 -- # '[' -z 1726621 ']' 00:24:00.372 11:18:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.372 11:18:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:00.372 11:18:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.373 11:18:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:00.373 11:18:20 -- common/autotest_common.sh@10 -- # set +x 00:24:00.373 [2024-12-13 11:18:20.106338] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:00.373 [2024-12-13 11:18:20.106388] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.373 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.373 [2024-12-13 11:18:20.162049] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:00.373 [2024-12-13 11:18:20.235377] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:00.373 [2024-12-13 11:18:20.235481] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.373 [2024-12-13 11:18:20.235489] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.373 [2024-12-13 11:18:20.235495] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
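As a hedged aside, the address discovery shown above boils down to one pipeline per RDMA netdev; the interface names and expected addresses below are taken straight from the log, and the loop itself is only a condensed sketch of what nvmf/common.sh does.

# Sketch: report the IPv4 address of each RDMA-capable interface, as in the log above
for ifc in mlx_0_0 mlx_0_1; do
    ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
done
# expected here: 192.168.100.8 and 192.168.100.9 (NVMF_FIRST/SECOND_TARGET_IP)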
00:24:00.373 [2024-12-13 11:18:20.235533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.373 [2024-12-13 11:18:20.235569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:00.373 [2024-12-13 11:18:20.235650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:00.373 [2024-12-13 11:18:20.235652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.373 11:18:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:00.373 11:18:20 -- common/autotest_common.sh@862 -- # return 0 00:24:00.373 11:18:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:00.373 11:18:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:00.373 11:18:20 -- common/autotest_common.sh@10 -- # set +x 00:24:00.630 11:18:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:00.630 11:18:20 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:00.630 11:18:20 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:03.908 11:18:23 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:03.908 11:18:23 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:03.908 11:18:24 -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:24:03.908 11:18:24 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:03.908 11:18:24 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:03.908 11:18:24 -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:24:03.908 11:18:24 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:03.908 11:18:24 -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:24:03.908 11:18:24 -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:24:04.166 [2024-12-13 11:18:24.480916] rdma.c:2780:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:24:04.166 [2024-12-13 11:18:24.498772] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2138de0/0x2146940) succeed. 00:24:04.166 [2024-12-13 11:18:24.507095] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x213a3d0/0x21c6980) succeed. 
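To make the target bring-up easier to follow, here is the RPC sequence the harness issues above and in the next few steps, condensed into one place. It is a sketch only: rpc.py, the transport options, the subsystem NQN, the bdev names and the listener address are copied from the log, but running it standalone assumes an nvmf_tgt is already up and listening on the default RPC socket.

# Sketch: condensed RDMA target bring-up, mirroring the perf.sh steps in this log
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0
$RPC bdev_malloc_create 64 512                                     # creates Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420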
00:24:04.166 11:18:24 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:04.424 11:18:24 -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:04.424 11:18:24 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:04.424 11:18:24 -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:04.424 11:18:24 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:04.681 11:18:25 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:04.938 [2024-12-13 11:18:25.280201] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:04.938 11:18:25 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:24:04.938 11:18:25 -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:24:04.938 11:18:25 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:24:04.938 11:18:25 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:04.938 11:18:25 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:24:06.309 Initializing NVMe Controllers 00:24:06.309 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:24:06.309 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:24:06.309 Initialization complete. Launching workers. 00:24:06.309 ======================================================== 00:24:06.309 Latency(us) 00:24:06.309 Device Information : IOPS MiB/s Average min max 00:24:06.309 PCIE (0000:d8:00.0) NSID 1 from core 0: 108094.86 422.25 295.76 36.68 4225.95 00:24:06.309 ======================================================== 00:24:06.309 Total : 108094.86 422.25 295.76 36.68 4225.95 00:24:06.309 00:24:06.309 11:18:26 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:06.309 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.585 Initializing NVMe Controllers 00:24:09.585 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:09.585 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:09.585 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:09.585 Initialization complete. Launching workers. 
00:24:09.585 ======================================================== 00:24:09.585 Latency(us) 00:24:09.585 Device Information : IOPS MiB/s Average min max 00:24:09.585 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7291.72 28.48 136.40 44.43 6066.67 00:24:09.585 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5563.95 21.73 179.55 64.48 6025.76 00:24:09.585 ======================================================== 00:24:09.585 Total : 12855.67 50.22 155.08 44.43 6066.67 00:24:09.585 00:24:09.585 11:18:30 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:09.585 EAL: No free 2048 kB hugepages reported on node 1 00:24:12.863 Initializing NVMe Controllers 00:24:12.863 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:12.863 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:12.863 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:12.863 Initialization complete. Launching workers. 00:24:12.863 ======================================================== 00:24:12.863 Latency(us) 00:24:12.863 Device Information : IOPS MiB/s Average min max 00:24:12.863 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 20566.00 80.34 1556.27 430.70 6842.63 00:24:12.863 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4032.00 15.75 7971.47 5877.64 10019.89 00:24:12.863 ======================================================== 00:24:12.863 Total : 24598.00 96.09 2607.82 430.70 10019.89 00:24:12.863 00:24:12.863 11:18:33 -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:24:12.863 11:18:33 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:13.121 EAL: No free 2048 kB hugepages reported on node 1 00:24:17.299 Initializing NVMe Controllers 00:24:17.299 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:17.299 Controller IO queue size 128, less than required. 00:24:17.299 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:17.299 Controller IO queue size 128, less than required. 00:24:17.299 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:17.299 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:17.299 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:17.299 Initialization complete. Launching workers. 
00:24:17.299 ======================================================== 00:24:17.299 Latency(us) 00:24:17.299 Device Information : IOPS MiB/s Average min max 00:24:17.299 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4314.54 1078.63 29774.68 13427.60 61564.26 00:24:17.299 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4357.53 1089.38 29098.76 13841.71 46692.31 00:24:17.299 ======================================================== 00:24:17.299 Total : 8672.07 2168.02 29435.04 13427.60 61564.26 00:24:17.299 00:24:17.299 11:18:37 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:24:17.299 EAL: No free 2048 kB hugepages reported on node 1 00:24:17.557 No valid NVMe controllers or AIO or URING devices found 00:24:17.557 Initializing NVMe Controllers 00:24:17.557 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:17.557 Controller IO queue size 128, less than required. 00:24:17.557 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:17.557 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:17.557 Controller IO queue size 128, less than required. 00:24:17.557 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:17.557 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:17.557 WARNING: Some requested NVMe devices were skipped 00:24:17.557 11:18:38 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:24:17.557 EAL: No free 2048 kB hugepages reported on node 1 00:24:22.816 Initializing NVMe Controllers 00:24:22.816 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:22.816 Controller IO queue size 128, less than required. 00:24:22.816 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:22.816 Controller IO queue size 128, less than required. 00:24:22.816 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:22.816 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:22.816 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:22.816 Initialization complete. Launching workers. 
00:24:22.816 00:24:22.816 ==================== 00:24:22.816 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:22.816 RDMA transport: 00:24:22.816 dev name: mlx5_0 00:24:22.816 polls: 443761 00:24:22.816 idle_polls: 439461 00:24:22.816 completions: 48548 00:24:22.816 queued_requests: 1 00:24:22.816 total_send_wrs: 24367 00:24:22.816 send_doorbell_updates: 4098 00:24:22.816 total_recv_wrs: 24367 00:24:22.816 recv_doorbell_updates: 4099 00:24:22.816 --------------------------------- 00:24:22.816 00:24:22.816 ==================== 00:24:22.816 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:22.816 RDMA transport: 00:24:22.816 dev name: mlx5_0 00:24:22.816 polls: 442330 00:24:22.816 idle_polls: 442036 00:24:22.816 completions: 21281 00:24:22.816 queued_requests: 1 00:24:22.816 total_send_wrs: 10718 00:24:22.816 send_doorbell_updates: 261 00:24:22.816 total_recv_wrs: 10718 00:24:22.816 recv_doorbell_updates: 262 00:24:22.816 --------------------------------- 00:24:22.816 ======================================================== 00:24:22.816 Latency(us) 00:24:22.816 Device Information : IOPS MiB/s Average min max 00:24:22.816 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6112.37 1528.09 21004.84 10782.14 52395.55 00:24:22.816 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2706.57 676.64 47485.52 28260.25 71915.25 00:24:22.816 ======================================================== 00:24:22.816 Total : 8818.94 2204.73 29131.88 10782.14 71915.25 00:24:22.816 00:24:22.816 11:18:42 -- host/perf.sh@66 -- # sync 00:24:22.816 11:18:42 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:22.816 11:18:42 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:24:22.816 11:18:42 -- host/perf.sh@71 -- # '[' -n 0000:d8:00.0 ']' 00:24:22.816 11:18:42 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:24:35.005 11:18:55 -- host/perf.sh@72 -- # ls_guid=c706908c-cef8-4369-b0bd-bfd8ad5a913d 00:24:35.005 11:18:55 -- host/perf.sh@73 -- # get_lvs_free_mb c706908c-cef8-4369-b0bd-bfd8ad5a913d 00:24:35.005 11:18:55 -- common/autotest_common.sh@1353 -- # local lvs_uuid=c706908c-cef8-4369-b0bd-bfd8ad5a913d 00:24:35.005 11:18:55 -- common/autotest_common.sh@1354 -- # local lvs_info 00:24:35.005 11:18:55 -- common/autotest_common.sh@1355 -- # local fc 00:24:35.005 11:18:55 -- common/autotest_common.sh@1356 -- # local cs 00:24:35.005 11:18:55 -- common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:35.005 11:18:55 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:24:35.005 { 00:24:35.005 "uuid": "c706908c-cef8-4369-b0bd-bfd8ad5a913d", 00:24:35.005 "name": "lvs_0", 00:24:35.005 "base_bdev": "Nvme0n1", 00:24:35.005 "total_data_clusters": 952929, 00:24:35.005 "free_clusters": 952929, 00:24:35.005 "block_size": 512, 00:24:35.005 "cluster_size": 4194304 00:24:35.005 } 00:24:35.005 ]' 00:24:35.005 11:18:55 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="c706908c-cef8-4369-b0bd-bfd8ad5a913d") .free_clusters' 00:24:35.005 11:18:55 -- common/autotest_common.sh@1358 -- # fc=952929 00:24:35.005 11:18:55 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="c706908c-cef8-4369-b0bd-bfd8ad5a913d") .cluster_size' 00:24:35.005 
11:18:55 -- common/autotest_common.sh@1359 -- # cs=4194304 00:24:35.005 11:18:55 -- common/autotest_common.sh@1362 -- # free_mb=3811716 00:24:35.005 11:18:55 -- common/autotest_common.sh@1363 -- # echo 3811716 00:24:35.005 3811716 00:24:35.005 11:18:55 -- host/perf.sh@77 -- # '[' 3811716 -gt 20480 ']' 00:24:35.005 11:18:55 -- host/perf.sh@78 -- # free_mb=20480 00:24:35.005 11:18:55 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c706908c-cef8-4369-b0bd-bfd8ad5a913d lbd_0 20480 00:24:35.937 11:18:56 -- host/perf.sh@80 -- # lb_guid=23995171-ae90-4fed-af23-51741913eaa9 00:24:35.937 11:18:56 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 23995171-ae90-4fed-af23-51741913eaa9 lvs_n_0 00:24:37.835 11:18:57 -- host/perf.sh@83 -- # ls_nested_guid=d7b7287f-2c68-4d33-976f-cf35559d3032 00:24:37.835 11:18:57 -- host/perf.sh@84 -- # get_lvs_free_mb d7b7287f-2c68-4d33-976f-cf35559d3032 00:24:37.835 11:18:57 -- common/autotest_common.sh@1353 -- # local lvs_uuid=d7b7287f-2c68-4d33-976f-cf35559d3032 00:24:37.835 11:18:57 -- common/autotest_common.sh@1354 -- # local lvs_info 00:24:37.835 11:18:57 -- common/autotest_common.sh@1355 -- # local fc 00:24:37.835 11:18:57 -- common/autotest_common.sh@1356 -- # local cs 00:24:37.835 11:18:57 -- common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:37.835 11:18:58 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:24:37.835 { 00:24:37.835 "uuid": "c706908c-cef8-4369-b0bd-bfd8ad5a913d", 00:24:37.835 "name": "lvs_0", 00:24:37.835 "base_bdev": "Nvme0n1", 00:24:37.835 "total_data_clusters": 952929, 00:24:37.835 "free_clusters": 947809, 00:24:37.835 "block_size": 512, 00:24:37.835 "cluster_size": 4194304 00:24:37.835 }, 00:24:37.835 { 00:24:37.835 "uuid": "d7b7287f-2c68-4d33-976f-cf35559d3032", 00:24:37.835 "name": "lvs_n_0", 00:24:37.835 "base_bdev": "23995171-ae90-4fed-af23-51741913eaa9", 00:24:37.835 "total_data_clusters": 5114, 00:24:37.835 "free_clusters": 5114, 00:24:37.835 "block_size": 512, 00:24:37.835 "cluster_size": 4194304 00:24:37.835 } 00:24:37.835 ]' 00:24:37.835 11:18:58 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="d7b7287f-2c68-4d33-976f-cf35559d3032") .free_clusters' 00:24:37.835 11:18:58 -- common/autotest_common.sh@1358 -- # fc=5114 00:24:37.835 11:18:58 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="d7b7287f-2c68-4d33-976f-cf35559d3032") .cluster_size' 00:24:37.835 11:18:58 -- common/autotest_common.sh@1359 -- # cs=4194304 00:24:37.835 11:18:58 -- common/autotest_common.sh@1362 -- # free_mb=20456 00:24:37.835 11:18:58 -- common/autotest_common.sh@1363 -- # echo 20456 00:24:37.835 20456 00:24:37.835 11:18:58 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:24:37.835 11:18:58 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d7b7287f-2c68-4d33-976f-cf35559d3032 lbd_nest_0 20456 00:24:37.835 11:18:58 -- host/perf.sh@88 -- # lb_nested_guid=2bc4c4c4-b0f7-4bc5-b32d-1e2ff838c3f9 00:24:37.835 11:18:58 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:38.092 11:18:58 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:24:38.092 11:18:58 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 2bc4c4c4-b0f7-4bc5-b32d-1e2ff838c3f9 00:24:38.350 11:18:58 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:38.350 11:18:58 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:24:38.350 11:18:58 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:24:38.350 11:18:58 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:24:38.350 11:18:58 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:24:38.350 11:18:58 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:38.350 EAL: No free 2048 kB hugepages reported on node 1 00:24:50.540 Initializing NVMe Controllers 00:24:50.540 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:50.540 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:50.540 Initialization complete. Launching workers. 00:24:50.540 ======================================================== 00:24:50.540 Latency(us) 00:24:50.540 Device Information : IOPS MiB/s Average min max 00:24:50.540 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6305.40 3.08 158.08 63.76 8078.72 00:24:50.540 ======================================================== 00:24:50.540 Total : 6305.40 3.08 158.08 63.76 8078.72 00:24:50.540 00:24:50.540 11:19:10 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:24:50.540 11:19:10 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:50.540 EAL: No free 2048 kB hugepages reported on node 1 00:25:02.866 Initializing NVMe Controllers 00:25:02.866 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:02.866 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:02.866 Initialization complete. Launching workers. 00:25:02.866 ======================================================== 00:25:02.866 Latency(us) 00:25:02.866 Device Information : IOPS MiB/s Average min max 00:25:02.866 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2736.60 342.07 365.08 148.99 7157.59 00:25:02.866 ======================================================== 00:25:02.867 Total : 2736.60 342.07 365.08 148.99 7157.59 00:25:02.867 00:25:02.867 11:19:21 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:25:02.867 11:19:21 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:02.867 11:19:21 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:02.867 EAL: No free 2048 kB hugepages reported on node 1 00:25:12.846 Initializing NVMe Controllers 00:25:12.846 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:12.846 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:12.846 Initialization complete. Launching workers. 
00:25:12.846 ======================================================== 00:25:12.846 Latency(us) 00:25:12.846 Device Information : IOPS MiB/s Average min max 00:25:12.846 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12483.88 6.10 2563.37 866.33 7525.35 00:25:12.846 ======================================================== 00:25:12.846 Total : 12483.88 6.10 2563.37 866.33 7525.35 00:25:12.846 00:25:12.846 11:19:32 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:12.846 11:19:32 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:12.846 EAL: No free 2048 kB hugepages reported on node 1 00:25:25.054 Initializing NVMe Controllers 00:25:25.054 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:25.054 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:25.054 Initialization complete. Launching workers. 00:25:25.054 ======================================================== 00:25:25.054 Latency(us) 00:25:25.054 Device Information : IOPS MiB/s Average min max 00:25:25.054 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3990.81 498.85 8024.38 3918.39 16029.66 00:25:25.054 ======================================================== 00:25:25.054 Total : 3990.81 498.85 8024.38 3918.39 16029.66 00:25:25.054 00:25:25.054 11:19:44 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:25:25.054 11:19:44 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:25.054 11:19:44 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:25.054 EAL: No free 2048 kB hugepages reported on node 1 00:25:37.265 Initializing NVMe Controllers 00:25:37.265 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:37.265 Controller IO queue size 128, less than required. 00:25:37.265 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:37.265 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:37.265 Initialization complete. Launching workers. 00:25:37.265 ======================================================== 00:25:37.265 Latency(us) 00:25:37.265 Device Information : IOPS MiB/s Average min max 00:25:37.265 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 20910.17 10.21 6123.81 1641.21 16657.28 00:25:37.265 ======================================================== 00:25:37.265 Total : 20910.17 10.21 6123.81 1641.21 16657.28 00:25:37.265 00:25:37.265 11:19:55 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:37.265 11:19:55 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:37.265 EAL: No free 2048 kB hugepages reported on node 1 00:25:47.250 Initializing NVMe Controllers 00:25:47.250 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:47.250 Controller IO queue size 128, less than required. 00:25:47.250 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:25:47.250 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:47.250 Initialization complete. Launching workers. 00:25:47.250 ======================================================== 00:25:47.250 Latency(us) 00:25:47.250 Device Information : IOPS MiB/s Average min max 00:25:47.250 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11986.63 1498.33 10682.12 3391.67 23836.21 00:25:47.250 ======================================================== 00:25:47.250 Total : 11986.63 1498.33 10682.12 3391.67 23836.21 00:25:47.250 00:25:47.250 11:20:06 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:47.250 11:20:07 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2bc4c4c4-b0f7-4bc5-b32d-1e2ff838c3f9 00:25:47.250 11:20:07 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:25:47.509 11:20:07 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 23995171-ae90-4fed-af23-51741913eaa9 00:25:47.768 11:20:08 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:25:48.027 11:20:08 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:48.027 11:20:08 -- host/perf.sh@114 -- # nvmftestfini 00:25:48.027 11:20:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:48.027 11:20:08 -- nvmf/common.sh@116 -- # sync 00:25:48.027 11:20:08 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:25:48.027 11:20:08 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:25:48.027 11:20:08 -- nvmf/common.sh@119 -- # set +e 00:25:48.027 11:20:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:48.027 11:20:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:25:48.027 rmmod nvme_rdma 00:25:48.027 rmmod nvme_fabrics 00:25:48.027 11:20:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:48.027 11:20:08 -- nvmf/common.sh@123 -- # set -e 00:25:48.027 11:20:08 -- nvmf/common.sh@124 -- # return 0 00:25:48.027 11:20:08 -- nvmf/common.sh@477 -- # '[' -n 1726621 ']' 00:25:48.027 11:20:08 -- nvmf/common.sh@478 -- # killprocess 1726621 00:25:48.027 11:20:08 -- common/autotest_common.sh@936 -- # '[' -z 1726621 ']' 00:25:48.027 11:20:08 -- common/autotest_common.sh@940 -- # kill -0 1726621 00:25:48.027 11:20:08 -- common/autotest_common.sh@941 -- # uname 00:25:48.027 11:20:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:48.027 11:20:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1726621 00:25:48.027 11:20:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:48.027 11:20:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:48.027 11:20:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1726621' 00:25:48.027 killing process with pid 1726621 00:25:48.027 11:20:08 -- common/autotest_common.sh@955 -- # kill 1726621 00:25:48.027 11:20:08 -- common/autotest_common.sh@960 -- # wait 1726621 00:25:52.222 11:20:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:52.222 11:20:12 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:25:52.222 00:25:52.222 real 1m58.159s 00:25:52.222 user 7m31.569s 00:25:52.222 sys 0m5.918s 00:25:52.222 11:20:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:52.222 11:20:12 -- 
common/autotest_common.sh@10 -- # set +x 00:25:52.222 ************************************ 00:25:52.222 END TEST nvmf_perf 00:25:52.222 ************************************ 00:25:52.222 11:20:12 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:25:52.223 11:20:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:52.223 11:20:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:52.223 11:20:12 -- common/autotest_common.sh@10 -- # set +x 00:25:52.223 ************************************ 00:25:52.223 START TEST nvmf_fio_host 00:25:52.223 ************************************ 00:25:52.223 11:20:12 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:25:52.223 * Looking for test storage... 00:25:52.223 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:52.223 11:20:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:52.223 11:20:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:52.223 11:20:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:52.223 11:20:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:52.223 11:20:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:52.223 11:20:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:52.223 11:20:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:52.223 11:20:12 -- scripts/common.sh@335 -- # IFS=.-: 00:25:52.223 11:20:12 -- scripts/common.sh@335 -- # read -ra ver1 00:25:52.223 11:20:12 -- scripts/common.sh@336 -- # IFS=.-: 00:25:52.223 11:20:12 -- scripts/common.sh@336 -- # read -ra ver2 00:25:52.223 11:20:12 -- scripts/common.sh@337 -- # local 'op=<' 00:25:52.223 11:20:12 -- scripts/common.sh@339 -- # ver1_l=2 00:25:52.223 11:20:12 -- scripts/common.sh@340 -- # ver2_l=1 00:25:52.223 11:20:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:52.223 11:20:12 -- scripts/common.sh@343 -- # case "$op" in 00:25:52.223 11:20:12 -- scripts/common.sh@344 -- # : 1 00:25:52.223 11:20:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:52.223 11:20:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:52.223 11:20:12 -- scripts/common.sh@364 -- # decimal 1 00:25:52.223 11:20:12 -- scripts/common.sh@352 -- # local d=1 00:25:52.223 11:20:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:52.223 11:20:12 -- scripts/common.sh@354 -- # echo 1 00:25:52.223 11:20:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:52.223 11:20:12 -- scripts/common.sh@365 -- # decimal 2 00:25:52.223 11:20:12 -- scripts/common.sh@352 -- # local d=2 00:25:52.223 11:20:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:52.223 11:20:12 -- scripts/common.sh@354 -- # echo 2 00:25:52.223 11:20:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:52.223 11:20:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:52.223 11:20:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:52.223 11:20:12 -- scripts/common.sh@367 -- # return 0 00:25:52.223 11:20:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:52.223 11:20:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:52.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.223 --rc genhtml_branch_coverage=1 00:25:52.223 --rc genhtml_function_coverage=1 00:25:52.223 --rc genhtml_legend=1 00:25:52.223 --rc geninfo_all_blocks=1 00:25:52.223 --rc geninfo_unexecuted_blocks=1 00:25:52.223 00:25:52.223 ' 00:25:52.223 11:20:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:52.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.223 --rc genhtml_branch_coverage=1 00:25:52.223 --rc genhtml_function_coverage=1 00:25:52.223 --rc genhtml_legend=1 00:25:52.223 --rc geninfo_all_blocks=1 00:25:52.223 --rc geninfo_unexecuted_blocks=1 00:25:52.223 00:25:52.223 ' 00:25:52.223 11:20:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:52.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.223 --rc genhtml_branch_coverage=1 00:25:52.223 --rc genhtml_function_coverage=1 00:25:52.223 --rc genhtml_legend=1 00:25:52.223 --rc geninfo_all_blocks=1 00:25:52.223 --rc geninfo_unexecuted_blocks=1 00:25:52.223 00:25:52.223 ' 00:25:52.223 11:20:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:52.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.223 --rc genhtml_branch_coverage=1 00:25:52.223 --rc genhtml_function_coverage=1 00:25:52.223 --rc genhtml_legend=1 00:25:52.223 --rc geninfo_all_blocks=1 00:25:52.223 --rc geninfo_unexecuted_blocks=1 00:25:52.223 00:25:52.223 ' 00:25:52.223 11:20:12 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:52.223 11:20:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:52.223 11:20:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:52.223 11:20:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:52.223 11:20:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.223 11:20:12 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.223 11:20:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.223 11:20:12 -- paths/export.sh@5 -- # export PATH 00:25:52.223 11:20:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.223 11:20:12 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:52.223 11:20:12 -- nvmf/common.sh@7 -- # uname -s 00:25:52.223 11:20:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:52.223 11:20:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:52.223 11:20:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:52.223 11:20:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:52.223 11:20:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:52.223 11:20:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:52.223 11:20:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:52.223 11:20:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:52.223 11:20:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:52.223 11:20:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:52.223 11:20:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:25:52.223 11:20:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:25:52.223 11:20:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:52.223 11:20:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:52.223 11:20:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:52.223 11:20:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:52.223 11:20:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:52.223 11:20:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:52.223 11:20:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:52.223 11:20:12 -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.223 11:20:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.223 11:20:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.223 11:20:12 -- paths/export.sh@5 -- # export PATH 00:25:52.223 11:20:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.223 11:20:12 -- nvmf/common.sh@46 -- # : 0 00:25:52.223 11:20:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:52.223 11:20:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:52.223 11:20:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:52.223 11:20:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:52.223 11:20:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:52.223 11:20:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:52.223 11:20:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:52.223 11:20:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:52.223 11:20:12 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:25:52.223 11:20:12 -- host/fio.sh@14 -- # nvmftestinit 00:25:52.223 11:20:12 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:25:52.223 11:20:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:52.224 11:20:12 -- 
nvmf/common.sh@436 -- # prepare_net_devs 00:25:52.224 11:20:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:52.224 11:20:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:52.224 11:20:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:52.224 11:20:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:52.224 11:20:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.224 11:20:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:52.224 11:20:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:52.224 11:20:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:52.224 11:20:12 -- common/autotest_common.sh@10 -- # set +x 00:25:57.500 11:20:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:57.500 11:20:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:57.500 11:20:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:57.500 11:20:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:57.500 11:20:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:57.500 11:20:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:57.500 11:20:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:57.500 11:20:17 -- nvmf/common.sh@294 -- # net_devs=() 00:25:57.500 11:20:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:57.500 11:20:17 -- nvmf/common.sh@295 -- # e810=() 00:25:57.500 11:20:17 -- nvmf/common.sh@295 -- # local -ga e810 00:25:57.500 11:20:17 -- nvmf/common.sh@296 -- # x722=() 00:25:57.500 11:20:17 -- nvmf/common.sh@296 -- # local -ga x722 00:25:57.500 11:20:17 -- nvmf/common.sh@297 -- # mlx=() 00:25:57.500 11:20:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:57.500 11:20:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:57.500 11:20:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:57.500 11:20:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:57.500 11:20:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:57.500 11:20:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:57.500 11:20:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:57.500 11:20:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:57.500 11:20:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:57.500 11:20:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:57.500 11:20:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:57.500 11:20:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:57.500 11:20:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:57.500 11:20:17 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:25:57.500 11:20:17 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:25:57.500 11:20:17 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:25:57.500 11:20:17 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:25:57.500 11:20:17 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:25:57.500 11:20:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:57.500 11:20:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:57.500 11:20:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:25:57.500 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:25:57.500 11:20:17 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:25:57.500 11:20:17 -- nvmf/common.sh@345 -- # [[ mlx5_core == 
unbound ]] 00:25:57.500 11:20:17 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:57.500 11:20:17 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:57.500 11:20:17 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:25:57.500 11:20:17 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:25:57.500 11:20:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:57.500 11:20:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:25:57.500 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:25:57.500 11:20:17 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:25:57.500 11:20:17 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:25:57.500 11:20:17 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:57.500 11:20:17 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:57.500 11:20:17 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:25:57.500 11:20:17 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:25:57.500 11:20:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:57.500 11:20:17 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:25:57.500 11:20:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:57.500 11:20:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.500 11:20:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:57.500 11:20:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.500 11:20:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:25:57.500 Found net devices under 0000:18:00.0: mlx_0_0 00:25:57.500 11:20:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.500 11:20:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:57.500 11:20:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.500 11:20:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:57.500 11:20:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.500 11:20:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:25:57.500 Found net devices under 0000:18:00.1: mlx_0_1 00:25:57.500 11:20:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.500 11:20:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:57.500 11:20:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:57.500 11:20:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:57.500 11:20:18 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:25:57.500 11:20:18 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:25:57.500 11:20:18 -- nvmf/common.sh@408 -- # rdma_device_init 00:25:57.500 11:20:18 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:25:57.500 11:20:18 -- nvmf/common.sh@57 -- # uname 00:25:57.500 11:20:18 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:25:57.500 11:20:18 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:25:57.500 11:20:18 -- nvmf/common.sh@62 -- # modprobe ib_core 00:25:57.500 11:20:18 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:25:57.500 11:20:18 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:25:57.500 11:20:18 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:25:57.500 11:20:18 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:25:57.501 11:20:18 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:25:57.501 11:20:18 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:25:57.501 11:20:18 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:57.501 11:20:18 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:25:57.501 11:20:18 -- 
nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:57.501 11:20:18 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:25:57.501 11:20:18 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:25:57.501 11:20:18 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:57.501 11:20:18 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:25:57.501 11:20:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:57.760 11:20:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:57.760 11:20:18 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:57.760 11:20:18 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:25:57.760 11:20:18 -- nvmf/common.sh@104 -- # continue 2 00:25:57.760 11:20:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:57.760 11:20:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:57.760 11:20:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:57.760 11:20:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:57.760 11:20:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:57.760 11:20:18 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:25:57.760 11:20:18 -- nvmf/common.sh@104 -- # continue 2 00:25:57.760 11:20:18 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:25:57.760 11:20:18 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:25:57.760 11:20:18 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:25:57.760 11:20:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:57.760 11:20:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:57.760 11:20:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:25:57.760 11:20:18 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:25:57.760 11:20:18 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:25:57.760 11:20:18 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:25:57.760 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:57.760 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:25:57.760 altname enp24s0f0np0 00:25:57.760 altname ens785f0np0 00:25:57.760 inet 192.168.100.8/24 scope global mlx_0_0 00:25:57.760 valid_lft forever preferred_lft forever 00:25:57.760 11:20:18 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:25:57.760 11:20:18 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:25:57.760 11:20:18 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:25:57.760 11:20:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:25:57.760 11:20:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:57.760 11:20:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:57.760 11:20:18 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:25:57.760 11:20:18 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:25:57.760 11:20:18 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:25:57.760 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:57.760 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:25:57.760 altname enp24s0f1np1 00:25:57.760 altname ens785f1np1 00:25:57.760 inet 192.168.100.9/24 scope global mlx_0_1 00:25:57.760 valid_lft forever preferred_lft forever 00:25:57.760 11:20:18 -- nvmf/common.sh@410 -- # return 0 00:25:57.760 11:20:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:57.760 11:20:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:57.760 11:20:18 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:25:57.760 11:20:18 -- nvmf/common.sh@444 -- # get_available_rdma_ips 
00:25:57.760 11:20:18 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:25:57.760 11:20:18 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:57.760 11:20:18 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:25:57.760 11:20:18 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:25:57.760 11:20:18 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:57.760 11:20:18 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:25:57.760 11:20:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:57.760 11:20:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:57.760 11:20:18 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:57.760 11:20:18 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:25:57.760 11:20:18 -- nvmf/common.sh@104 -- # continue 2 00:25:57.760 11:20:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:57.760 11:20:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:57.760 11:20:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:57.760 11:20:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:57.760 11:20:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:57.760 11:20:18 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:25:57.760 11:20:18 -- nvmf/common.sh@104 -- # continue 2 00:25:57.760 11:20:18 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:25:57.760 11:20:18 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:25:57.760 11:20:18 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:25:57.760 11:20:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:25:57.761 11:20:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:57.761 11:20:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:57.761 11:20:18 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:25:57.761 11:20:18 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:25:57.761 11:20:18 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:25:57.761 11:20:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:25:57.761 11:20:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:57.761 11:20:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:57.761 11:20:18 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:25:57.761 192.168.100.9' 00:25:57.761 11:20:18 -- nvmf/common.sh@445 -- # head -n 1 00:25:57.761 11:20:18 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:25:57.761 192.168.100.9' 00:25:57.761 11:20:18 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:57.761 11:20:18 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:25:57.761 192.168.100.9' 00:25:57.761 11:20:18 -- nvmf/common.sh@446 -- # tail -n +2 00:25:57.761 11:20:18 -- nvmf/common.sh@446 -- # head -n 1 00:25:57.761 11:20:18 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:57.761 11:20:18 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:25:57.761 11:20:18 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:57.761 11:20:18 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:25:57.761 11:20:18 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:25:57.761 11:20:18 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:25:57.761 11:20:18 -- host/fio.sh@16 -- # [[ y != y ]] 00:25:57.761 11:20:18 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:57.761 11:20:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:57.761 11:20:18 -- common/autotest_common.sh@10 -- # set +x 
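For readability: the nvmftestinit block above is where common.sh discovers the RDMA-capable netdevs (mlx_0_0/mlx_0_1 under 0000:18:00.0/1) and derives the target addresses used by every subsequent connect string. The address extraction, using only the commands shown in the trace, boils down to:

    # Read the IPv4 address off each mlx5 netdev (same ip/awk/cut pipeline as traced above)
    for ifc in mlx_0_0 mlx_0_1; do
        ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
    done
    # -> 192.168.100.8 and 192.168.100.9; the first is exported as NVMF_FIRST_TARGET_IP,
    #    the second as NVMF_SECOND_TARGET_IP, and 'modprobe nvme-rdma' loads the host-side driver.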
00:25:57.761 11:20:18 -- host/fio.sh@24 -- # nvmfpid=1749624 00:25:57.761 11:20:18 -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:57.761 11:20:18 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:57.761 11:20:18 -- host/fio.sh@28 -- # waitforlisten 1749624 00:25:57.761 11:20:18 -- common/autotest_common.sh@829 -- # '[' -z 1749624 ']' 00:25:57.761 11:20:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:57.761 11:20:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:57.761 11:20:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:57.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:57.761 11:20:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:57.761 11:20:18 -- common/autotest_common.sh@10 -- # set +x 00:25:57.761 [2024-12-13 11:20:18.232613] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:57.761 [2024-12-13 11:20:18.232656] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:57.761 EAL: No free 2048 kB hugepages reported on node 1 00:25:57.761 [2024-12-13 11:20:18.281526] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:58.020 [2024-12-13 11:20:18.353374] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:58.020 [2024-12-13 11:20:18.353470] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:58.020 [2024-12-13 11:20:18.353477] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:58.020 [2024-12-13 11:20:18.353483] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:58.020 [2024-12-13 11:20:18.353526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:58.020 [2024-12-13 11:20:18.353609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:58.020 [2024-12-13 11:20:18.353695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:58.020 [2024-12-13 11:20:18.353696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.589 11:20:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:58.589 11:20:19 -- common/autotest_common.sh@862 -- # return 0 00:25:58.589 11:20:19 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:58.848 [2024-12-13 11:20:19.188442] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1957960/0x195be50) succeed. 00:25:58.848 [2024-12-13 11:20:19.196593] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1958f50/0x199d4f0) succeed. 
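The fio host test then starts its own target instance and RDMA transport before building the Malloc1 subsystem below. A rough sketch of that bring-up, with paths and flags taken from the trace (backgrounding the target with '&' and capturing $! are assumptions about how nvmfpid=1749624 was obtained; waitforlisten is the helper named in the trace, not reproduced here):

    bin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    # -m 0xF: four reactors (cores 0-3), -e 0xFFFF: all tracepoint groups, -i 0: instance/shared-memory id
    "$bin" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # host/fio.sh waits for the RPC socket (waitforlisten) before issuing RPCs,
    # then creates the RDMA transport with the options exactly as traced above
    "$rpc" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192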
00:25:58.848 11:20:19 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:58.848 11:20:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:58.848 11:20:19 -- common/autotest_common.sh@10 -- # set +x 00:25:58.848 11:20:19 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:59.107 Malloc1 00:25:59.107 11:20:19 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:59.366 11:20:19 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:59.367 11:20:19 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:59.625 [2024-12-13 11:20:20.041440] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:59.625 11:20:20 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:25:59.884 11:20:20 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:25:59.884 11:20:20 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:25:59.884 11:20:20 -- common/autotest_common.sh@1349 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:25:59.884 11:20:20 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:59.884 11:20:20 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:59.884 11:20:20 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:59.884 11:20:20 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:25:59.884 11:20:20 -- common/autotest_common.sh@1330 -- # shift 00:25:59.884 11:20:20 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:59.884 11:20:20 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:59.884 11:20:20 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:25:59.884 11:20:20 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:59.884 11:20:20 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:59.884 11:20:20 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:59.884 11:20:20 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:59.884 11:20:20 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:59.884 11:20:20 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:25:59.884 11:20:20 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:59.884 11:20:20 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:59.884 11:20:20 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:59.885 11:20:20 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:59.885 11:20:20 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:59.885 11:20:20 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:26:00.143 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:00.143 fio-3.35 00:26:00.143 Starting 1 thread 00:26:00.143 EAL: No free 2048 kB hugepages reported on node 1 00:26:02.677 00:26:02.677 test: (groupid=0, jobs=1): err= 0: pid=1750312: Fri Dec 13 11:20:22 2024 00:26:02.677 read: IOPS=20.2k, BW=78.8MiB/s (82.6MB/s)(158MiB/2003msec) 00:26:02.677 slat (nsec): min=1266, max=104005, avg=1368.74, stdev=612.10 00:26:02.677 clat (usec): min=1421, max=5675, avg=3152.97, stdev=64.56 00:26:02.677 lat (usec): min=1441, max=5677, avg=3154.34, stdev=64.49 00:26:02.677 clat percentiles (usec): 00:26:02.677 | 1.00th=[ 3130], 5.00th=[ 3130], 10.00th=[ 3130], 20.00th=[ 3130], 00:26:02.677 | 30.00th=[ 3130], 40.00th=[ 3163], 50.00th=[ 3163], 60.00th=[ 3163], 00:26:02.677 | 70.00th=[ 3163], 80.00th=[ 3163], 90.00th=[ 3163], 95.00th=[ 3163], 00:26:02.677 | 99.00th=[ 3195], 99.50th=[ 3228], 99.90th=[ 3523], 99.95th=[ 4817], 00:26:02.677 | 99.99th=[ 5604] 00:26:02.677 bw ( KiB/s): min=79056, max=81488, per=100.00%, avg=80720.00, stdev=1130.47, samples=4 00:26:02.677 iops : min=19764, max=20372, avg=20180.00, stdev=282.62, samples=4 00:26:02.677 write: IOPS=20.1k, BW=78.7MiB/s (82.5MB/s)(158MiB/2003msec); 0 zone resets 00:26:02.677 slat (nsec): min=1298, max=20072, avg=1725.91, stdev=429.46 00:26:02.677 clat (usec): min=2106, max=5670, avg=3152.02, stdev=71.55 00:26:02.677 lat (usec): min=2114, max=5671, avg=3153.74, stdev=71.50 00:26:02.677 clat percentiles (usec): 00:26:02.677 | 1.00th=[ 3097], 5.00th=[ 3130], 10.00th=[ 3130], 20.00th=[ 3130], 00:26:02.677 | 30.00th=[ 3130], 40.00th=[ 3163], 50.00th=[ 3163], 60.00th=[ 3163], 00:26:02.677 | 70.00th=[ 3163], 80.00th=[ 3163], 90.00th=[ 3163], 95.00th=[ 3163], 00:26:02.677 | 99.00th=[ 3195], 99.50th=[ 3261], 99.90th=[ 4080], 99.95th=[ 4883], 00:26:02.677 | 99.99th=[ 5604] 00:26:02.677 bw ( KiB/s): min=78864, max=81416, per=99.96%, avg=80512.00, stdev=1156.09, samples=4 00:26:02.677 iops : min=19716, max=20354, avg=20128.00, stdev=289.02, samples=4 00:26:02.677 lat (msec) : 2=0.01%, 4=99.90%, 10=0.10% 00:26:02.677 cpu : usr=99.55%, sys=0.05%, ctx=17, majf=0, minf=2 00:26:02.677 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:26:02.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.677 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:02.677 issued rwts: total=40409,40334,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.677 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.677 00:26:02.677 Run status group 0 (all jobs): 00:26:02.677 READ: bw=78.8MiB/s (82.6MB/s), 78.8MiB/s-78.8MiB/s (82.6MB/s-82.6MB/s), io=158MiB (166MB), run=2003-2003msec 00:26:02.677 WRITE: bw=78.7MiB/s (82.5MB/s), 78.7MiB/s-78.7MiB/s (82.5MB/s-82.5MB/s), io=158MiB (165MB), run=2003-2003msec 00:26:02.677 11:20:22 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:26:02.677 11:20:22 -- common/autotest_common.sh@1349 -- # fio_plugin 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:26:02.677 11:20:22 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:02.677 11:20:22 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:02.677 11:20:22 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:02.677 11:20:22 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:02.677 11:20:22 -- common/autotest_common.sh@1330 -- # shift 00:26:02.677 11:20:22 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:02.677 11:20:22 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:02.677 11:20:22 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:02.678 11:20:22 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:02.678 11:20:22 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:02.678 11:20:22 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:02.678 11:20:22 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:02.678 11:20:22 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:02.678 11:20:22 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:02.678 11:20:22 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:02.678 11:20:22 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:02.678 11:20:22 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:02.678 11:20:22 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:02.678 11:20:22 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:02.678 11:20:22 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:26:02.678 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:26:02.678 fio-3.35 00:26:02.678 Starting 1 thread 00:26:02.936 EAL: No free 2048 kB hugepages reported on node 1 00:26:05.468 00:26:05.468 test: (groupid=0, jobs=1): err= 0: pid=1750957: Fri Dec 13 11:20:25 2024 00:26:05.468 read: IOPS=15.9k, BW=248MiB/s (260MB/s)(489MiB/1969msec) 00:26:05.468 slat (nsec): min=2109, max=35531, avg=2393.20, stdev=955.18 00:26:05.468 clat (usec): min=420, max=8120, avg=1509.28, stdev=1216.33 00:26:05.468 lat (usec): min=422, max=8135, avg=1511.67, stdev=1216.64 00:26:05.468 clat percentiles (usec): 00:26:05.468 | 1.00th=[ 619], 5.00th=[ 701], 10.00th=[ 758], 20.00th=[ 832], 00:26:05.468 | 30.00th=[ 898], 40.00th=[ 979], 50.00th=[ 1074], 60.00th=[ 1172], 00:26:05.468 | 70.00th=[ 1303], 80.00th=[ 1500], 90.00th=[ 4293], 95.00th=[ 4490], 00:26:05.468 | 99.00th=[ 5800], 99.50th=[ 6259], 99.90th=[ 6718], 99.95th=[ 6915], 00:26:05.468 | 99.99th=[ 7767] 00:26:05.468 bw ( KiB/s): min=119456, max=126944, per=48.49%, avg=123200.00, stdev=3304.22, samples=4 00:26:05.468 iops : min= 7466, max= 7934, avg=7700.00, stdev=206.51, samples=4 00:26:05.468 write: IOPS=9029, BW=141MiB/s (148MB/s)(251MiB/1776msec); 0 zone resets 00:26:05.468 slat (usec): min=24, max=128, avg=28.04, 
stdev= 5.87 00:26:05.468 clat (usec): min=3715, max=18358, avg=11324.41, stdev=1651.56 00:26:05.468 lat (usec): min=3740, max=18383, avg=11352.45, stdev=1651.19 00:26:05.468 clat percentiles (usec): 00:26:05.468 | 1.00th=[ 6521], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[10159], 00:26:05.468 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11338], 60.00th=[11731], 00:26:05.468 | 70.00th=[12125], 80.00th=[12649], 90.00th=[13304], 95.00th=[13829], 00:26:05.468 | 99.00th=[15401], 99.50th=[16057], 99.90th=[17433], 99.95th=[17957], 00:26:05.468 | 99.99th=[18220] 00:26:05.468 bw ( KiB/s): min=123552, max=132736, per=88.19%, avg=127400.00, stdev=3920.44, samples=4 00:26:05.468 iops : min= 7722, max= 8296, avg=7962.50, stdev=245.03, samples=4 00:26:05.468 lat (usec) : 500=0.01%, 750=6.23%, 1000=21.93% 00:26:05.468 lat (msec) : 2=28.69%, 4=2.19%, 10=13.29%, 20=27.65% 00:26:05.468 cpu : usr=96.31%, sys=1.75%, ctx=222, majf=0, minf=1 00:26:05.468 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:26:05.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.468 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:05.468 issued rwts: total=31266,16036,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.468 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:05.468 00:26:05.468 Run status group 0 (all jobs): 00:26:05.468 READ: bw=248MiB/s (260MB/s), 248MiB/s-248MiB/s (260MB/s-260MB/s), io=489MiB (512MB), run=1969-1969msec 00:26:05.468 WRITE: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s), io=251MiB (263MB), run=1776-1776msec 00:26:05.468 11:20:25 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:05.468 11:20:25 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:26:05.468 11:20:25 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:26:05.468 11:20:25 -- host/fio.sh@51 -- # get_nvme_bdfs 00:26:05.468 11:20:25 -- common/autotest_common.sh@1508 -- # bdfs=() 00:26:05.468 11:20:25 -- common/autotest_common.sh@1508 -- # local bdfs 00:26:05.468 11:20:25 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:05.468 11:20:25 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:05.468 11:20:25 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:26:05.468 11:20:25 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:26:05.468 11:20:25 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:d8:00.0 00:26:05.468 11:20:25 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 -i 192.168.100.8 00:26:08.753 Nvme0n1 00:26:08.753 11:20:28 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:26:21.008 11:20:39 -- host/fio.sh@53 -- # ls_guid=05b8ad13-d4aa-4f1a-be56-1449bc06168b 00:26:21.008 11:20:39 -- host/fio.sh@54 -- # get_lvs_free_mb 05b8ad13-d4aa-4f1a-be56-1449bc06168b 00:26:21.008 11:20:39 -- common/autotest_common.sh@1353 -- # local lvs_uuid=05b8ad13-d4aa-4f1a-be56-1449bc06168b 00:26:21.008 11:20:39 -- common/autotest_common.sh@1354 -- # local lvs_info 00:26:21.008 11:20:39 -- common/autotest_common.sh@1355 -- # local fc 00:26:21.008 11:20:39 -- common/autotest_common.sh@1356 -- # local cs 00:26:21.008 11:20:39 -- 
common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:21.008 11:20:39 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:26:21.008 { 00:26:21.008 "uuid": "05b8ad13-d4aa-4f1a-be56-1449bc06168b", 00:26:21.008 "name": "lvs_0", 00:26:21.008 "base_bdev": "Nvme0n1", 00:26:21.008 "total_data_clusters": 3725, 00:26:21.008 "free_clusters": 3725, 00:26:21.008 "block_size": 512, 00:26:21.008 "cluster_size": 1073741824 00:26:21.008 } 00:26:21.008 ]' 00:26:21.008 11:20:39 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="05b8ad13-d4aa-4f1a-be56-1449bc06168b") .free_clusters' 00:26:21.008 11:20:39 -- common/autotest_common.sh@1358 -- # fc=3725 00:26:21.008 11:20:39 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="05b8ad13-d4aa-4f1a-be56-1449bc06168b") .cluster_size' 00:26:21.008 11:20:39 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:26:21.008 11:20:39 -- common/autotest_common.sh@1362 -- # free_mb=3814400 00:26:21.008 11:20:39 -- common/autotest_common.sh@1363 -- # echo 3814400 00:26:21.008 3814400 00:26:21.008 11:20:39 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 3814400 00:26:21.008 c2f18463-2e28-4679-8a63-576bd1c79f77 00:26:21.008 11:20:40 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:26:21.008 11:20:40 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:26:21.008 11:20:40 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:26:21.008 11:20:41 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:26:21.008 11:20:41 -- common/autotest_common.sh@1349 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:26:21.008 11:20:41 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:21.008 11:20:41 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:21.008 11:20:41 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:21.008 11:20:41 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:21.008 11:20:41 -- common/autotest_common.sh@1330 -- # shift 00:26:21.008 11:20:41 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:21.008 11:20:41 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:21.008 11:20:41 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:21.008 11:20:41 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:21.008 11:20:41 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:21.008 11:20:41 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:21.008 11:20:41 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:21.008 11:20:41 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 
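The 3814400 passed to bdev_lvol_create above is simply the lvstore's free_clusters multiplied by its cluster_size, expressed in MiB (3725 clusters of 1 GiB each). A minimal sketch of that arithmetic, using the same rpc.py path seen throughout this run but addressing the lvstore by name rather than UUID (so a sketch, not the get_lvs_free_mb helper itself):

# Sketch only: compute free MiB for the lvstore created on Nvme0n1.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
fc=$($rpc bdev_lvol_get_lvstores | jq '.[] | select(.name=="lvs_0") .free_clusters')   # 3725
cs=$($rpc bdev_lvol_get_lvstores | jq '.[] | select(.name=="lvs_0") .cluster_size')    # 1073741824 (1 GiB)
echo $(( fc * (cs / 1024 / 1024) ))                                                    # 3725 * 1024 = 3814400 MiB

The same calculation recurs later for lvs_n_0, where 952668 clusters of 4 MiB give the 3810672 figure.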
00:26:21.008 11:20:41 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:21.008 11:20:41 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:21.008 11:20:41 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:21.008 11:20:41 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:21.008 11:20:41 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:21.008 11:20:41 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:21.008 11:20:41 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:26:21.008 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:21.008 fio-3.35 00:26:21.008 Starting 1 thread 00:26:21.008 EAL: No free 2048 kB hugepages reported on node 1 00:26:23.544 00:26:23.544 test: (groupid=0, jobs=1): err= 0: pid=1754315: Fri Dec 13 11:20:43 2024 00:26:23.544 read: IOPS=7340, BW=28.7MiB/s (30.1MB/s)(57.5MiB/2005msec) 00:26:23.544 slat (nsec): min=1269, max=27761, avg=1383.18, stdev=378.33 00:26:23.544 clat (usec): min=143, max=885238, avg=8707.70, stdev=58706.48 00:26:23.544 lat (usec): min=145, max=885253, avg=8709.09, stdev=58706.54 00:26:23.544 clat percentiles (msec): 00:26:23.544 | 1.00th=[ 5], 5.00th=[ 5], 10.00th=[ 5], 20.00th=[ 5], 00:26:23.544 | 30.00th=[ 5], 40.00th=[ 5], 50.00th=[ 5], 60.00th=[ 5], 00:26:23.544 | 70.00th=[ 5], 80.00th=[ 5], 90.00th=[ 5], 95.00th=[ 5], 00:26:23.544 | 99.00th=[ 6], 99.50th=[ 9], 99.90th=[ 885], 99.95th=[ 885], 00:26:23.544 | 99.99th=[ 885] 00:26:23.544 bw ( KiB/s): min= 384, max=53776, per=99.77%, avg=29294.00, stdev=28300.64, samples=4 00:26:23.544 iops : min= 96, max=13444, avg=7323.50, stdev=7075.16, samples=4 00:26:23.544 write: IOPS=7301, BW=28.5MiB/s (29.9MB/s)(57.2MiB/2005msec); 0 zone resets 00:26:23.544 slat (nsec): min=1307, max=17740, avg=1756.77, stdev=363.13 00:26:23.544 clat (usec): min=321, max=885613, avg=8523.43, stdev=57053.68 00:26:23.544 lat (usec): min=323, max=885617, avg=8525.19, stdev=57053.73 00:26:23.544 clat percentiles (msec): 00:26:23.544 | 1.00th=[ 5], 5.00th=[ 5], 10.00th=[ 5], 20.00th=[ 5], 00:26:23.544 | 30.00th=[ 5], 40.00th=[ 5], 50.00th=[ 5], 60.00th=[ 5], 00:26:23.544 | 70.00th=[ 5], 80.00th=[ 5], 90.00th=[ 5], 95.00th=[ 5], 00:26:23.544 | 99.00th=[ 6], 99.50th=[ 8], 99.90th=[ 885], 99.95th=[ 885], 00:26:23.544 | 99.99th=[ 885] 00:26:23.544 bw ( KiB/s): min= 416, max=53360, per=99.86%, avg=29166.00, stdev=27989.49, samples=4 00:26:23.544 iops : min= 104, max=13340, avg=7291.50, stdev=6997.37, samples=4 00:26:23.544 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:26:23.544 lat (msec) : 2=0.05%, 4=0.63%, 10=98.84%, 1000=0.44% 00:26:23.544 cpu : usr=99.60%, sys=0.00%, ctx=17, majf=0, minf=2 00:26:23.544 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:26:23.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:23.544 issued rwts: total=14717,14640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.544 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:23.544 00:26:23.544 Run status group 0 (all jobs): 00:26:23.544 READ: bw=28.7MiB/s (30.1MB/s), 28.7MiB/s-28.7MiB/s 
(30.1MB/s-30.1MB/s), io=57.5MiB (60.3MB), run=2005-2005msec 00:26:23.544 WRITE: bw=28.5MiB/s (29.9MB/s), 28.5MiB/s-28.5MiB/s (29.9MB/s-29.9MB/s), io=57.2MiB (60.0MB), run=2005-2005msec 00:26:23.544 11:20:43 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:23.544 11:20:44 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:26:26.083 11:20:46 -- host/fio.sh@64 -- # ls_nested_guid=d0dff807-84c9-40f6-9c08-9ffa40d21154 00:26:26.083 11:20:46 -- host/fio.sh@65 -- # get_lvs_free_mb d0dff807-84c9-40f6-9c08-9ffa40d21154 00:26:26.083 11:20:46 -- common/autotest_common.sh@1353 -- # local lvs_uuid=d0dff807-84c9-40f6-9c08-9ffa40d21154 00:26:26.083 11:20:46 -- common/autotest_common.sh@1354 -- # local lvs_info 00:26:26.083 11:20:46 -- common/autotest_common.sh@1355 -- # local fc 00:26:26.083 11:20:46 -- common/autotest_common.sh@1356 -- # local cs 00:26:26.083 11:20:46 -- common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:26.083 11:20:46 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:26:26.083 { 00:26:26.083 "uuid": "05b8ad13-d4aa-4f1a-be56-1449bc06168b", 00:26:26.083 "name": "lvs_0", 00:26:26.083 "base_bdev": "Nvme0n1", 00:26:26.083 "total_data_clusters": 3725, 00:26:26.083 "free_clusters": 0, 00:26:26.083 "block_size": 512, 00:26:26.083 "cluster_size": 1073741824 00:26:26.083 }, 00:26:26.083 { 00:26:26.083 "uuid": "d0dff807-84c9-40f6-9c08-9ffa40d21154", 00:26:26.083 "name": "lvs_n_0", 00:26:26.083 "base_bdev": "c2f18463-2e28-4679-8a63-576bd1c79f77", 00:26:26.083 "total_data_clusters": 952668, 00:26:26.083 "free_clusters": 952668, 00:26:26.083 "block_size": 512, 00:26:26.083 "cluster_size": 4194304 00:26:26.083 } 00:26:26.083 ]' 00:26:26.083 11:20:46 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="d0dff807-84c9-40f6-9c08-9ffa40d21154") .free_clusters' 00:26:26.083 11:20:46 -- common/autotest_common.sh@1358 -- # fc=952668 00:26:26.083 11:20:46 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="d0dff807-84c9-40f6-9c08-9ffa40d21154") .cluster_size' 00:26:26.083 11:20:46 -- common/autotest_common.sh@1359 -- # cs=4194304 00:26:26.083 11:20:46 -- common/autotest_common.sh@1362 -- # free_mb=3810672 00:26:26.083 11:20:46 -- common/autotest_common.sh@1363 -- # echo 3810672 00:26:26.083 3810672 00:26:26.083 11:20:46 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 3810672 00:26:27.462 3a58dac2-ce33-40bd-95c7-3f80eaec6872 00:26:27.462 11:20:48 -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:26:27.721 11:20:48 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:26:27.980 11:20:48 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:26:28.263 11:20:48 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:26:28.263 11:20:48 -- common/autotest_common.sh@1349 -- # fio_plugin 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:26:28.263 11:20:48 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:28.263 11:20:48 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:28.263 11:20:48 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:28.263 11:20:48 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:28.263 11:20:48 -- common/autotest_common.sh@1330 -- # shift 00:26:28.263 11:20:48 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:28.263 11:20:48 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:28.263 11:20:48 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:28.263 11:20:48 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:28.263 11:20:48 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:28.263 11:20:48 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:28.263 11:20:48 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:28.263 11:20:48 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:28.263 11:20:48 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:28.263 11:20:48 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:28.263 11:20:48 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:28.263 11:20:48 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:28.263 11:20:48 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:28.263 11:20:48 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:28.263 11:20:48 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:26:28.526 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:28.526 fio-3.35 00:26:28.526 Starting 1 thread 00:26:28.526 EAL: No free 2048 kB hugepages reported on node 1 00:26:31.173 00:26:31.173 test: (groupid=0, jobs=1): err= 0: pid=1755745: Fri Dec 13 11:20:51 2024 00:26:31.173 read: IOPS=11.2k, BW=43.7MiB/s (45.8MB/s)(87.6MiB/2005msec) 00:26:31.173 slat (nsec): min=1296, max=19156, avg=1390.12, stdev=232.24 00:26:31.173 clat (usec): min=3090, max=9903, avg=5639.32, stdev=191.17 00:26:31.173 lat (usec): min=3093, max=9904, avg=5640.71, stdev=191.14 00:26:31.173 clat percentiles (usec): 00:26:31.173 | 1.00th=[ 5538], 5.00th=[ 5604], 10.00th=[ 5604], 20.00th=[ 5604], 00:26:31.173 | 30.00th=[ 5604], 40.00th=[ 5604], 50.00th=[ 5604], 60.00th=[ 5604], 00:26:31.173 | 70.00th=[ 5669], 80.00th=[ 5669], 90.00th=[ 5669], 95.00th=[ 5669], 00:26:31.173 | 99.00th=[ 6390], 99.50th=[ 6783], 99.90th=[ 8455], 99.95th=[ 8979], 00:26:31.173 | 99.99th=[ 9896] 00:26:31.173 bw ( KiB/s): min=42304, max=45704, per=99.94%, avg=44710.00, stdev=1613.56, samples=4 00:26:31.173 iops : min=10576, max=11426, avg=11177.50, stdev=403.39, samples=4 00:26:31.173 write: IOPS=11.1k, BW=43.5MiB/s (45.6MB/s)(87.2MiB/2005msec); 0 zone resets 00:26:31.173 slat (nsec): min=1324, 
max=17565, avg=1749.93, stdev=335.19 00:26:31.173 clat (usec): min=3101, max=9908, avg=5656.49, stdev=191.44 00:26:31.173 lat (usec): min=3106, max=9910, avg=5658.24, stdev=191.43 00:26:31.173 clat percentiles (usec): 00:26:31.173 | 1.00th=[ 5538], 5.00th=[ 5604], 10.00th=[ 5604], 20.00th=[ 5604], 00:26:31.173 | 30.00th=[ 5604], 40.00th=[ 5604], 50.00th=[ 5669], 60.00th=[ 5669], 00:26:31.173 | 70.00th=[ 5669], 80.00th=[ 5669], 90.00th=[ 5669], 95.00th=[ 5735], 00:26:31.173 | 99.00th=[ 6456], 99.50th=[ 6849], 99.90th=[ 7701], 99.95th=[ 8979], 00:26:31.173 | 99.99th=[ 9896] 00:26:31.173 bw ( KiB/s): min=42696, max=45656, per=99.99%, avg=44548.00, stdev=1293.69, samples=4 00:26:31.173 iops : min=10674, max=11414, avg=11137.00, stdev=323.42, samples=4 00:26:31.173 lat (msec) : 4=0.07%, 10=99.93% 00:26:31.173 cpu : usr=99.55%, sys=0.10%, ctx=16, majf=0, minf=2 00:26:31.173 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:26:31.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:31.173 issued rwts: total=22425,22331,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:31.173 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:31.173 00:26:31.173 Run status group 0 (all jobs): 00:26:31.173 READ: bw=43.7MiB/s (45.8MB/s), 43.7MiB/s-43.7MiB/s (45.8MB/s-45.8MB/s), io=87.6MiB (91.9MB), run=2005-2005msec 00:26:31.173 WRITE: bw=43.5MiB/s (45.6MB/s), 43.5MiB/s-43.5MiB/s (45.6MB/s-45.6MB/s), io=87.2MiB (91.5MB), run=2005-2005msec 00:26:31.173 11:20:51 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:31.173 11:20:51 -- host/fio.sh@74 -- # sync 00:26:31.173 11:20:51 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:26:46.056 11:21:06 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:26:46.056 11:21:06 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:26:58.267 11:21:17 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:26:58.267 11:21:17 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:27:02.459 11:21:22 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:02.459 11:21:22 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:27:02.459 11:21:22 -- host/fio.sh@86 -- # nvmftestfini 00:27:02.459 11:21:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:02.459 11:21:22 -- nvmf/common.sh@116 -- # sync 00:27:02.459 11:21:22 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:27:02.459 11:21:22 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:27:02.459 11:21:22 -- nvmf/common.sh@119 -- # set +e 00:27:02.459 11:21:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:02.459 11:21:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:27:02.459 rmmod nvme_rdma 00:27:02.459 rmmod nvme_fabrics 00:27:02.459 11:21:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:02.459 11:21:22 -- nvmf/common.sh@123 -- # set -e 00:27:02.459 11:21:22 -- nvmf/common.sh@124 -- # return 0 00:27:02.459 11:21:22 -- nvmf/common.sh@477 -- # '[' -n 1749624 ']' 00:27:02.459 11:21:22 -- nvmf/common.sh@478 -- # killprocess 1749624 00:27:02.459 
11:21:22 -- common/autotest_common.sh@936 -- # '[' -z 1749624 ']' 00:27:02.459 11:21:22 -- common/autotest_common.sh@940 -- # kill -0 1749624 00:27:02.459 11:21:22 -- common/autotest_common.sh@941 -- # uname 00:27:02.459 11:21:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:02.459 11:21:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1749624 00:27:02.459 11:21:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:02.459 11:21:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:02.459 11:21:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1749624' 00:27:02.459 killing process with pid 1749624 00:27:02.459 11:21:22 -- common/autotest_common.sh@955 -- # kill 1749624 00:27:02.459 11:21:22 -- common/autotest_common.sh@960 -- # wait 1749624 00:27:02.459 11:21:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:02.459 11:21:22 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:27:02.459 00:27:02.459 real 1m10.330s 00:27:02.459 user 5m1.840s 00:27:02.459 sys 0m6.418s 00:27:02.459 11:21:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:02.459 11:21:22 -- common/autotest_common.sh@10 -- # set +x 00:27:02.459 ************************************ 00:27:02.459 END TEST nvmf_fio_host 00:27:02.459 ************************************ 00:27:02.459 11:21:22 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:27:02.459 11:21:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:02.459 11:21:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:02.459 11:21:22 -- common/autotest_common.sh@10 -- # set +x 00:27:02.459 ************************************ 00:27:02.459 START TEST nvmf_failover 00:27:02.459 ************************************ 00:27:02.459 11:21:22 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:27:02.459 * Looking for test storage... 00:27:02.459 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:02.459 11:21:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:02.459 11:21:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:02.459 11:21:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:02.459 11:21:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:02.460 11:21:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:02.460 11:21:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:02.460 11:21:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:02.460 11:21:22 -- scripts/common.sh@335 -- # IFS=.-: 00:27:02.460 11:21:22 -- scripts/common.sh@335 -- # read -ra ver1 00:27:02.460 11:21:22 -- scripts/common.sh@336 -- # IFS=.-: 00:27:02.460 11:21:22 -- scripts/common.sh@336 -- # read -ra ver2 00:27:02.460 11:21:22 -- scripts/common.sh@337 -- # local 'op=<' 00:27:02.460 11:21:22 -- scripts/common.sh@339 -- # ver1_l=2 00:27:02.460 11:21:22 -- scripts/common.sh@340 -- # ver2_l=1 00:27:02.460 11:21:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:02.460 11:21:22 -- scripts/common.sh@343 -- # case "$op" in 00:27:02.460 11:21:22 -- scripts/common.sh@344 -- # : 1 00:27:02.460 11:21:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:02.460 11:21:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:02.460 11:21:22 -- scripts/common.sh@364 -- # decimal 1 00:27:02.460 11:21:22 -- scripts/common.sh@352 -- # local d=1 00:27:02.460 11:21:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:02.460 11:21:22 -- scripts/common.sh@354 -- # echo 1 00:27:02.460 11:21:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:02.460 11:21:22 -- scripts/common.sh@365 -- # decimal 2 00:27:02.460 11:21:22 -- scripts/common.sh@352 -- # local d=2 00:27:02.460 11:21:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:02.460 11:21:22 -- scripts/common.sh@354 -- # echo 2 00:27:02.460 11:21:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:02.460 11:21:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:02.460 11:21:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:02.460 11:21:22 -- scripts/common.sh@367 -- # return 0 00:27:02.460 11:21:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:02.460 11:21:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:02.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.460 --rc genhtml_branch_coverage=1 00:27:02.460 --rc genhtml_function_coverage=1 00:27:02.460 --rc genhtml_legend=1 00:27:02.460 --rc geninfo_all_blocks=1 00:27:02.460 --rc geninfo_unexecuted_blocks=1 00:27:02.460 00:27:02.460 ' 00:27:02.460 11:21:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:02.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.460 --rc genhtml_branch_coverage=1 00:27:02.460 --rc genhtml_function_coverage=1 00:27:02.460 --rc genhtml_legend=1 00:27:02.460 --rc geninfo_all_blocks=1 00:27:02.460 --rc geninfo_unexecuted_blocks=1 00:27:02.460 00:27:02.460 ' 00:27:02.460 11:21:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:02.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.460 --rc genhtml_branch_coverage=1 00:27:02.460 --rc genhtml_function_coverage=1 00:27:02.460 --rc genhtml_legend=1 00:27:02.460 --rc geninfo_all_blocks=1 00:27:02.460 --rc geninfo_unexecuted_blocks=1 00:27:02.460 00:27:02.460 ' 00:27:02.460 11:21:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:02.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.460 --rc genhtml_branch_coverage=1 00:27:02.460 --rc genhtml_function_coverage=1 00:27:02.460 --rc genhtml_legend=1 00:27:02.460 --rc geninfo_all_blocks=1 00:27:02.460 --rc geninfo_unexecuted_blocks=1 00:27:02.460 00:27:02.460 ' 00:27:02.460 11:21:22 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:02.460 11:21:22 -- nvmf/common.sh@7 -- # uname -s 00:27:02.460 11:21:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:02.460 11:21:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:02.460 11:21:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:02.460 11:21:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:02.460 11:21:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:02.460 11:21:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:02.460 11:21:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:02.460 11:21:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:02.460 11:21:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:02.460 11:21:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:02.460 11:21:22 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:27:02.460 11:21:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:27:02.460 11:21:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:02.460 11:21:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:02.460 11:21:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:02.460 11:21:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:02.460 11:21:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:02.460 11:21:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:02.460 11:21:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:02.460 11:21:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.460 11:21:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.460 11:21:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.460 11:21:22 -- paths/export.sh@5 -- # export PATH 00:27:02.460 11:21:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.460 11:21:22 -- nvmf/common.sh@46 -- # : 0 00:27:02.460 11:21:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:02.460 11:21:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:02.460 11:21:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:02.460 11:21:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:02.460 11:21:22 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:02.460 11:21:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:02.460 11:21:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:02.460 11:21:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:02.460 11:21:22 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:02.460 11:21:22 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:02.460 11:21:22 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:27:02.460 11:21:22 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:02.460 11:21:22 -- host/failover.sh@18 -- # nvmftestinit 00:27:02.460 11:21:22 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:27:02.460 11:21:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:02.460 11:21:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:02.460 11:21:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:02.460 11:21:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:02.460 11:21:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:02.460 11:21:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:02.460 11:21:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:02.460 11:21:22 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:02.460 11:21:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:02.460 11:21:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:02.460 11:21:22 -- common/autotest_common.sh@10 -- # set +x 00:27:07.736 11:21:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:07.736 11:21:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:07.736 11:21:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:07.736 11:21:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:07.736 11:21:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:07.736 11:21:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:07.736 11:21:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:07.736 11:21:28 -- nvmf/common.sh@294 -- # net_devs=() 00:27:07.736 11:21:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:07.736 11:21:28 -- nvmf/common.sh@295 -- # e810=() 00:27:07.736 11:21:28 -- nvmf/common.sh@295 -- # local -ga e810 00:27:07.736 11:21:28 -- nvmf/common.sh@296 -- # x722=() 00:27:07.736 11:21:28 -- nvmf/common.sh@296 -- # local -ga x722 00:27:07.736 11:21:28 -- nvmf/common.sh@297 -- # mlx=() 00:27:07.736 11:21:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:07.736 11:21:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:07.736 11:21:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:07.736 11:21:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:07.736 11:21:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:07.736 11:21:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:07.736 11:21:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:07.736 11:21:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:07.736 11:21:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:07.736 11:21:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:07.736 11:21:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:07.736 11:21:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:07.736 11:21:28 
-- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:07.736 11:21:28 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:27:07.736 11:21:28 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:27:07.736 11:21:28 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:27:07.736 11:21:28 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:27:07.736 11:21:28 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:27:07.736 11:21:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:07.736 11:21:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:07.736 11:21:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:27:07.736 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:27:07.736 11:21:28 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:27:07.736 11:21:28 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:27:07.736 11:21:28 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:07.736 11:21:28 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:07.736 11:21:28 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:27:07.736 11:21:28 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:27:07.736 11:21:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:07.736 11:21:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:27:07.736 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:27:07.736 11:21:28 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:27:07.736 11:21:28 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:27:07.736 11:21:28 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:07.736 11:21:28 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:07.736 11:21:28 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:27:07.736 11:21:28 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:27:07.736 11:21:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:07.736 11:21:28 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:27:07.736 11:21:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:07.736 11:21:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.736 11:21:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:07.736 11:21:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.736 11:21:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:27:07.736 Found net devices under 0000:18:00.0: mlx_0_0 00:27:07.736 11:21:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.736 11:21:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:07.736 11:21:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.736 11:21:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:07.736 11:21:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.736 11:21:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:27:07.736 Found net devices under 0000:18:00.1: mlx_0_1 00:27:07.736 11:21:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.736 11:21:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:07.736 11:21:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:07.736 11:21:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:07.736 11:21:28 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:27:07.736 11:21:28 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:27:07.736 11:21:28 -- nvmf/common.sh@408 -- # rdma_device_init 00:27:07.736 11:21:28 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 
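The discovery loop above walks the PCI bus for Mellanox (vendor 0x15b3) functions and then resolves each one to its netdev through sysfs; stripped of the nvmf/common.sh bookkeeping, it amounts to roughly the following (a standalone sketch, not the helper itself):

# Sketch: map each mlx5 PCI function to the net device(s) bound to it, via sysfs only.
for pci in /sys/bus/pci/devices/*; do
    vendor=$(cat "$pci/vendor"); device=$(cat "$pci/device")
    [ "$vendor" = "0x15b3" ] || continue                      # 0x15b3 == Mellanox
    echo "Found ${pci##*/} ($vendor - $device)"               # e.g. Found 0000:18:00.0 (0x15b3 - 0x1015)
    for net in "$pci"/net/*; do                               # netdev(s) under this function
        [ -e "$net" ] && echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done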
00:27:07.736 11:21:28 -- nvmf/common.sh@57 -- # uname 00:27:07.736 11:21:28 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:27:07.736 11:21:28 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:27:07.736 11:21:28 -- nvmf/common.sh@62 -- # modprobe ib_core 00:27:07.736 11:21:28 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:27:07.736 11:21:28 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:27:07.736 11:21:28 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:27:07.736 11:21:28 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:27:07.736 11:21:28 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:27:07.736 11:21:28 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:27:07.736 11:21:28 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:07.736 11:21:28 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:27:07.736 11:21:28 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:07.736 11:21:28 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:27:07.736 11:21:28 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:27:07.736 11:21:28 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:07.736 11:21:28 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:27:07.736 11:21:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:07.736 11:21:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:07.736 11:21:28 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:07.736 11:21:28 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:27:07.736 11:21:28 -- nvmf/common.sh@104 -- # continue 2 00:27:07.736 11:21:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:07.736 11:21:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:07.736 11:21:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:07.736 11:21:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:07.736 11:21:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:07.736 11:21:28 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:27:07.736 11:21:28 -- nvmf/common.sh@104 -- # continue 2 00:27:07.736 11:21:28 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:27:07.736 11:21:28 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:27:07.736 11:21:28 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:27:07.736 11:21:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:27:07.736 11:21:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:07.736 11:21:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:07.736 11:21:28 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:27:07.736 11:21:28 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:27:07.736 11:21:28 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:27:07.736 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:07.736 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:27:07.736 altname enp24s0f0np0 00:27:07.736 altname ens785f0np0 00:27:07.736 inet 192.168.100.8/24 scope global mlx_0_0 00:27:07.736 valid_lft forever preferred_lft forever 00:27:07.736 11:21:28 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:27:07.736 11:21:28 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:27:07.736 11:21:28 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:27:07.736 11:21:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:27:07.736 11:21:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:07.736 11:21:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:07.736 11:21:28 -- nvmf/common.sh@73 -- # 
ip=192.168.100.9 00:27:07.736 11:21:28 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:27:07.736 11:21:28 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:27:07.736 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:07.736 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:27:07.736 altname enp24s0f1np1 00:27:07.736 altname ens785f1np1 00:27:07.736 inet 192.168.100.9/24 scope global mlx_0_1 00:27:07.736 valid_lft forever preferred_lft forever 00:27:07.736 11:21:28 -- nvmf/common.sh@410 -- # return 0 00:27:07.736 11:21:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:07.736 11:21:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:07.736 11:21:28 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:27:07.736 11:21:28 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:27:07.736 11:21:28 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:27:07.736 11:21:28 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:07.736 11:21:28 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:27:07.736 11:21:28 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:27:07.736 11:21:28 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:07.736 11:21:28 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:27:07.736 11:21:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:07.736 11:21:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:07.736 11:21:28 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:07.736 11:21:28 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:27:07.736 11:21:28 -- nvmf/common.sh@104 -- # continue 2 00:27:07.736 11:21:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:07.736 11:21:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:07.736 11:21:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:07.736 11:21:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:07.736 11:21:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:07.736 11:21:28 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:27:07.736 11:21:28 -- nvmf/common.sh@104 -- # continue 2 00:27:07.736 11:21:28 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:27:07.736 11:21:28 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:27:07.736 11:21:28 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:27:07.737 11:21:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:27:07.737 11:21:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:07.737 11:21:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:07.996 11:21:28 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:27:07.996 11:21:28 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:27:07.996 11:21:28 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:27:07.996 11:21:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:27:07.996 11:21:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:07.996 11:21:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:07.996 11:21:28 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:27:07.996 192.168.100.9' 00:27:07.996 11:21:28 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:27:07.996 192.168.100.9' 00:27:07.996 11:21:28 -- nvmf/common.sh@445 -- # head -n 1 00:27:07.996 11:21:28 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:07.996 11:21:28 -- nvmf/common.sh@446 -- # head -n 1 00:27:07.996 11:21:28 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:27:07.996 
192.168.100.9' 00:27:07.996 11:21:28 -- nvmf/common.sh@446 -- # tail -n +2 00:27:07.996 11:21:28 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:07.996 11:21:28 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:27:07.996 11:21:28 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:07.996 11:21:28 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:27:07.996 11:21:28 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:27:07.996 11:21:28 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:27:07.996 11:21:28 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:27:07.996 11:21:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:07.996 11:21:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:07.996 11:21:28 -- common/autotest_common.sh@10 -- # set +x 00:27:07.996 11:21:28 -- nvmf/common.sh@469 -- # nvmfpid=1765524 00:27:07.996 11:21:28 -- nvmf/common.sh@470 -- # waitforlisten 1765524 00:27:07.996 11:21:28 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:07.996 11:21:28 -- common/autotest_common.sh@829 -- # '[' -z 1765524 ']' 00:27:07.996 11:21:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:07.996 11:21:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:07.996 11:21:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:07.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:07.996 11:21:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:07.996 11:21:28 -- common/autotest_common.sh@10 -- # set +x 00:27:07.996 [2024-12-13 11:21:28.413120] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:07.996 [2024-12-13 11:21:28.413173] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:07.996 EAL: No free 2048 kB hugepages reported on node 1 00:27:07.996 [2024-12-13 11:21:28.465563] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:07.996 [2024-12-13 11:21:28.539445] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:07.996 [2024-12-13 11:21:28.539544] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:07.996 [2024-12-13 11:21:28.539551] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:07.996 [2024-12-13 11:21:28.539557] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
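The nvmfappstart -m 0xE step above boils down to launching nvmf_tgt pinned to cores 1-3 and blocking until its RPC socket answers. A minimal standalone sketch, using the binary path and socket shown in this log (the polling loop is illustrative, not the waitforlisten implementation):

# Illustrative sketch; the real waitforlisten helper does more bookkeeping than this loop.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
until [ -S /var/tmp/spdk.sock ] && $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5                                        # keep polling until the RPC socket answers
done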
00:27:07.996 [2024-12-13 11:21:28.539650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:07.996 [2024-12-13 11:21:28.539752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:07.996 [2024-12-13 11:21:28.539753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:08.933 11:21:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:08.933 11:21:29 -- common/autotest_common.sh@862 -- # return 0 00:27:08.933 11:21:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:08.933 11:21:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:08.933 11:21:29 -- common/autotest_common.sh@10 -- # set +x 00:27:08.933 11:21:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:08.933 11:21:29 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:08.933 [2024-12-13 11:21:29.421044] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1af3140/0x1af7630) succeed. 00:27:08.933 [2024-12-13 11:21:29.428966] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1af4690/0x1b38cd0) succeed. 00:27:09.192 11:21:29 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:09.192 Malloc0 00:27:09.192 11:21:29 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:09.451 11:21:29 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:09.709 11:21:30 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:09.709 [2024-12-13 11:21:30.250574] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:09.968 11:21:30 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:27:09.968 [2024-12-13 11:21:30.422876] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:27:09.968 11:21:30 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:27:10.227 [2024-12-13 11:21:30.595447] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:27:10.227 11:21:30 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:27:10.227 11:21:30 -- host/failover.sh@31 -- # bdevperf_pid=1765827 00:27:10.227 11:21:30 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:10.227 11:21:30 -- host/failover.sh@34 -- # waitforlisten 1765827 /var/tmp/bdevperf.sock 00:27:10.227 11:21:30 -- common/autotest_common.sh@829 -- # '[' -z 1765827 ']' 00:27:10.227 11:21:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:10.227 
11:21:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:10.227 11:21:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:10.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:10.227 11:21:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:10.227 11:21:30 -- common/autotest_common.sh@10 -- # set +x 00:27:11.164 11:21:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:11.164 11:21:31 -- common/autotest_common.sh@862 -- # return 0 00:27:11.164 11:21:31 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:11.164 NVMe0n1 00:27:11.164 11:21:31 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:11.423 00:27:11.423 11:21:31 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:11.423 11:21:31 -- host/failover.sh@39 -- # run_test_pid=1766097 00:27:11.423 11:21:31 -- host/failover.sh@41 -- # sleep 1 00:27:12.802 11:21:32 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:12.802 11:21:33 -- host/failover.sh@45 -- # sleep 3 00:27:16.097 11:21:36 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:16.097 00:27:16.097 11:21:36 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:27:16.097 11:21:36 -- host/failover.sh@50 -- # sleep 3 00:27:19.384 11:21:39 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:19.384 [2024-12-13 11:21:39.683954] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:19.384 11:21:39 -- host/failover.sh@55 -- # sleep 1 00:27:20.321 11:21:40 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:27:20.321 11:21:40 -- host/failover.sh@59 -- # wait 1766097 00:27:26.895 0 00:27:26.895 11:21:47 -- host/failover.sh@61 -- # killprocess 1765827 00:27:26.895 11:21:47 -- common/autotest_common.sh@936 -- # '[' -z 1765827 ']' 00:27:26.895 11:21:47 -- common/autotest_common.sh@940 -- # kill -0 1765827 00:27:26.895 11:21:47 -- common/autotest_common.sh@941 -- # uname 00:27:26.895 11:21:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:26.895 11:21:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1765827 00:27:26.895 11:21:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:26.895 11:21:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:26.895 11:21:47 
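On the initiator side the test runs bdevperf against the subsystem and then flaps listeners so the I/O path has to fail over while verify traffic is running. A condensed sketch reconstructed from the xtrace above, with the flags and addresses of this run; the real host/failover.sh also sets traps, waits for the bdevperf RPC socket, and redirects output to try.txt, all omitted here:

    spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    sock=/var/tmp/bdevperf.sock
    $spdk/build/examples/bdevperf -z -r $sock -q 128 -o 4096 -w verify -t 15 -f &
    bdevperf_pid=$!
    # Two initial paths to the same subsystem, ports 4420 and 4421.
    $spdk/scripts/rpc.py -s $sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $spdk/scripts/rpc.py -s $sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests &
    run_test_pid=$!
    sleep 1
    # Drop the active listener so I/O fails over, then keep rotating paths.
    $spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    sleep 3
    $spdk/scripts/rpc.py -s $sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
    sleep 3
    $spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    sleep 1
    $spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
    wait $run_test_pid
    kill $bdevperf_pid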
-- common/autotest_common.sh@954 -- # echo 'killing process with pid 1765827' 00:27:26.895 killing process with pid 1765827 00:27:26.895 11:21:47 -- common/autotest_common.sh@955 -- # kill 1765827 00:27:26.895 11:21:47 -- common/autotest_common.sh@960 -- # wait 1765827 00:27:26.895 11:21:47 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:26.895 [2024-12-13 11:21:30.646961] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:26.895 [2024-12-13 11:21:30.647004] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1765827 ] 00:27:26.895 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.895 [2024-12-13 11:21:30.698323] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.895 [2024-12-13 11:21:30.764996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.895 Running I/O for 15 seconds... 00:27:26.895 [2024-12-13 11:21:34.101934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.895 [2024-12-13 11:21:34.101975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:27:26.896 [2024-12-13 11:21:34.101984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.896 [2024-12-13 11:21:34.101991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:27:26.896 [2024-12-13 11:21:34.101998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.896 [2024-12-13 11:21:34.102004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:27:26.896 [2024-12-13 11:21:34.102011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.896 [2024-12-13 11:21:34.102017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:27:26.896 [2024-12-13 11:21:34.103758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:26.896 [2024-12-13 11:21:34.103776] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:26.896 [2024-12-13 11:21:34.103789] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:27:26.896 [2024-12-13 11:21:34.103796] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:27:26.896 [2024-12-13 11:21:34.103811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:102264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x180800 00:27:26.896 [2024-12-13 11:21:34.103819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.896 [2024-12-13 11:21:34.103850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:102936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138af580 len:0x1000 key:0x182700 00:27:26.896 [2024-12-13 11:21:34.103858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.896 [2024-12-13 11:21:34.103886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:102944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.896 [2024-12-13 11:21:34.103893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.896 [2024-12-13 11:21:34.103907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:102952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ad480 len:0x1000 key:0x182700 00:27:26.896 [2024-12-13 11:21:34.103914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.896 [2024-12-13 11:21:34.103927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:102288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x180800 00:27:26.896 [2024-12-13 11:21:34.103934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.896 [2024-12-13 11:21:34.103966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:102296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x180800 00:27:26.896 [2024-12-13 11:21:34.103973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.896 [2024-12-13 11:21:34.103987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:102960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa300 len:0x1000 key:0x182700 00:27:26.896 [2024-12-13 11:21:34.103994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.896 [2024-12-13 11:21:34.104007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:102968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.896 [2024-12-13 11:21:34.104014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.896 [2024-12-13 11:21:34.104027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:102976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.896 [2024-12-13 
11:21:34.104034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.896 [2024-12-13 11:21:34.104046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:102320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x180800 00:27:26.896 [2024-12-13 11:21:34.104053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.896 [2024-12-13 11:21:34.104079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:102984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a6100 len:0x1000 key:0x182700 00:27:26.896 [2024-12-13 11:21:34.104086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.896 [2024-12-13 11:21:34.104112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x180800 00:27:26.896 [2024-12-13 11:21:34.104119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.896 [2024-12-13 11:21:34.104146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:102992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.896 [2024-12-13 11:21:34.104153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.896 [2024-12-13 11:21:34.104166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:103000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a2f80 len:0x1000 key:0x182700 00:27:26.896 [2024-12-13 11:21:34.104172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.896 [2024-12-13 11:21:34.104185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:103008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a1f00 len:0x1000 key:0x182700 00:27:26.896 [2024-12-13 11:21:34.104192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.896 [2024-12-13 11:21:34.104219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:102352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x180800 00:27:26.896 [2024-12-13 11:21:34.104226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.896 [2024-12-13 11:21:34.104252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:102360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x180800 00:27:26.896 [2024-12-13 11:21:34.104269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.896 [2024-12-13 11:21:34.104284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:102368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x180800 00:27:26.896 [2024-12-13 11:21:34.104291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.896 [2024-12-13 11:21:34.104317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:103016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.896 [2024-12-13 11:21:34.104324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.896 [2024-12-13 11:21:34.104337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:102376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x180800 00:27:26.896 [2024-12-13 11:21:34.104343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.896 [2024-12-13 11:21:34.104356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:103024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.896 [2024-12-13 11:21:34.104363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.896 [2024-12-13 11:21:34.104389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:102384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x180800 00:27:26.896 [2024-12-13 11:21:34.104396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.896 [2024-12-13 11:21:34.104409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x180800 00:27:26.896 [2024-12-13 11:21:34.104415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.896 [2024-12-13 11:21:34.104442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:103032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013898a80 len:0x1000 key:0x182700 00:27:26.896 [2024-12-13 11:21:34.104449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.896 [2024-12-13 11:21:34.104474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:103040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013897a00 len:0x1000 key:0x182700 00:27:26.896 [2024-12-13 11:21:34.104481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.896 [2024-12-13 11:21:34.104507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:103048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013896980 len:0x1000 key:0x182700 00:27:26.896 [2024-12-13 11:21:34.104513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.896 [2024-12-13 11:21:34.104539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:102408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x180800 00:27:26.896 [2024-12-13 11:21:34.104546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.896 [2024-12-13 11:21:34.104572] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:103056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013894880 len:0x1000 key:0x182700 00:27:26.896 [2024-12-13 11:21:34.104579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.896 [2024-12-13 11:21:34.104594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.896 [2024-12-13 11:21:34.104601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.896 [2024-12-13 11:21:34.104614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013892780 len:0x1000 key:0x182700 00:27:26.896 [2024-12-13 11:21:34.104621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.896 [2024-12-13 11:21:34.104634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:103080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.896 [2024-12-13 11:21:34.104641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.896 [2024-12-13 11:21:34.104654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:102432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x180800 00:27:26.896 [2024-12-13 11:21:34.104660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.896 [2024-12-13 11:21:34.104673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:103088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388f600 len:0x1000 key:0x182700 00:27:26.897 [2024-12-13 11:21:34.104680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.897 [2024-12-13 11:21:34.104707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:103096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388e580 len:0x1000 key:0x182700 00:27:26.897 [2024-12-13 11:21:34.104714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.897 [2024-12-13 11:21:34.104740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x180800 00:27:26.897 [2024-12-13 11:21:34.104747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.897 [2024-12-13 11:21:34.104761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:103104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388c480 len:0x1000 key:0x182700 00:27:26.897 [2024-12-13 11:21:34.104768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.897 [2024-12-13 11:21:34.104794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:103112 len:8 SGL KEYED DATA BLOCK 
ADDRESS 0x20001388b400 len:0x1000 key:0x182700 00:27:26.897 [2024-12-13 11:21:34.104802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.897 [2024-12-13 11:21:34.104815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:102456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x180800 00:27:26.897 [2024-12-13 11:21:34.104822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.897 [2024-12-13 11:21:34.104848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:102464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x180800 00:27:26.897 [2024-12-13 11:21:34.104856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.897 [2024-12-13 11:21:34.104870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:102472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x180800 00:27:26.897 [2024-12-13 11:21:34.104879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.897 [2024-12-13 11:21:34.104892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:102480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x180800 00:27:26.897 [2024-12-13 11:21:34.104898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.897 [2024-12-13 11:21:34.104911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013886180 len:0x1000 key:0x182700 00:27:26.897 [2024-12-13 11:21:34.104919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.897 [2024-12-13 11:21:34.104932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:103128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013885100 len:0x1000 key:0x182700 00:27:26.897 [2024-12-13 11:21:34.104939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.897 [2024-12-13 11:21:34.104951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:103136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.897 [2024-12-13 11:21:34.104958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.897 [2024-12-13 11:21:34.104970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.897 [2024-12-13 11:21:34.104978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.897 [2024-12-13 11:21:34.105004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:102512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x180800 00:27:26.897 [2024-12-13 11:21:34.105011] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.897 [2024-12-13 11:21:34.105037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:102520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x180800 00:27:26.897 [2024-12-13 11:21:34.105046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.897 [2024-12-13 11:21:34.105058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:103152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387fe80 len:0x1000 key:0x182700 00:27:26.897 [2024-12-13 11:21:34.105065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.897 [2024-12-13 11:21:34.105092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:103160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387ee00 len:0x1000 key:0x182700 00:27:26.897 [2024-12-13 11:21:34.105099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.897 [2024-12-13 11:21:34.105125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:103168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.897 [2024-12-13 11:21:34.105132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.897 [2024-12-13 11:21:34.105159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:102536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x180800 00:27:26.897 [2024-12-13 11:21:34.105166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.897 [2024-12-13 11:21:34.105194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:103176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387bc80 len:0x1000 key:0x182700 00:27:26.897 [2024-12-13 11:21:34.105202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.897 [2024-12-13 11:21:34.105228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:103184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.897 [2024-12-13 11:21:34.105235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.897 [2024-12-13 11:21:34.105248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:102552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x180800 00:27:26.897 [2024-12-13 11:21:34.105255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.897 [2024-12-13 11:21:34.105287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:103192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.897 [2024-12-13 11:21:34.105295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.897 [2024-12-13 
11:21:34.105321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:102560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x180800 00:27:26.897 [2024-12-13 11:21:34.105328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.897 [2024-12-13 11:21:34.105341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:103200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013876a00 len:0x1000 key:0x182700 00:27:26.897 [2024-12-13 11:21:34.105348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.897 [2024-12-13 11:21:34.105375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:103208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f9980 len:0x1000 key:0x182700 00:27:26.897 [2024-12-13 11:21:34.105382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.897 [2024-12-13 11:21:34.105395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:102576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x180800 00:27:26.897 [2024-12-13 11:21:34.105402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.897 [2024-12-13 11:21:34.105429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:103216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.897 [2024-12-13 11:21:34.105436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.897 [2024-12-13 11:21:34.105462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:103224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f6800 len:0x1000 key:0x182700 00:27:26.897 [2024-12-13 11:21:34.105470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.897 [2024-12-13 11:21:34.105496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:102592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x180800 00:27:26.897 [2024-12-13 11:21:34.105503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.897 [2024-12-13 11:21:34.105529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.897 [2024-12-13 11:21:34.105541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.897 [2024-12-13 11:21:34.105568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:103240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.897 [2024-12-13 11:21:34.105575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.897 [2024-12-13 11:21:34.105601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:102608 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x200007520000 len:0x1000 key:0x180800 00:27:26.897 [2024-12-13 11:21:34.105609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.897 [2024-12-13 11:21:34.105622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:103248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f1580 len:0x1000 key:0x182700 00:27:26.897 [2024-12-13 11:21:34.105629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.897 [2024-12-13 11:21:34.105655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:103256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f0500 len:0x1000 key:0x182700 00:27:26.897 [2024-12-13 11:21:34.105663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.897 [2024-12-13 11:21:34.105689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:102624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x180800 00:27:26.897 [2024-12-13 11:21:34.105697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.897 [2024-12-13 11:21:34.105723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:103264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.897 [2024-12-13 11:21:34.105730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.897 [2024-12-13 11:21:34.105756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:103272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ed380 len:0x1000 key:0x182700 00:27:26.897 [2024-12-13 11:21:34.105763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 11:21:34.105789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:102632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x180800 00:27:26.898 [2024-12-13 11:21:34.105797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 11:21:34.105824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:102640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x180800 00:27:26.898 [2024-12-13 11:21:34.105831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 11:21:34.105857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:103280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ea200 len:0x1000 key:0x182700 00:27:26.898 [2024-12-13 11:21:34.105865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 11:21:34.105878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:103288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.898 [2024-12-13 11:21:34.105884] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 11:21:34.105912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:102664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x180800 00:27:26.898 [2024-12-13 11:21:34.105919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 11:21:34.105932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:102672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x180800 00:27:26.898 [2024-12-13 11:21:34.105939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 11:21:34.105965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:102680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x180800 00:27:26.898 [2024-12-13 11:21:34.105973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 11:21:34.105999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:103296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.898 [2024-12-13 11:21:34.106006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 11:21:34.106032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:102688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x180800 00:27:26.898 [2024-12-13 11:21:34.106041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 11:21:34.106068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:103304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.898 [2024-12-13 11:21:34.106076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 11:21:34.106089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:103312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e1e00 len:0x1000 key:0x182700 00:27:26.898 [2024-12-13 11:21:34.106096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 11:21:34.106122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:103320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.898 [2024-12-13 11:21:34.106129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 11:21:34.106142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:103328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dfd00 len:0x1000 key:0x182700 00:27:26.898 [2024-12-13 11:21:34.106149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 
11:21:34.106175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:103336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.898 [2024-12-13 11:21:34.106182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 11:21:34.106208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:103344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.898 [2024-12-13 11:21:34.106215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 11:21:34.106241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:102720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x180800 00:27:26.898 [2024-12-13 11:21:34.106250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 11:21:34.106263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:102728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x180800 00:27:26.898 [2024-12-13 11:21:34.106276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 11:21:34.106289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:103352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138daa80 len:0x1000 key:0x182700 00:27:26.898 [2024-12-13 11:21:34.106296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 11:21:34.106323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:102744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x180800 00:27:26.898 [2024-12-13 11:21:34.106330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 11:21:34.106343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:103360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.898 [2024-12-13 11:21:34.106351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 11:21:34.106364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:102760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x180800 00:27:26.898 [2024-12-13 11:21:34.106371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 11:21:34.106396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:103368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.898 [2024-12-13 11:21:34.106404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 11:21:34.106416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:103376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.898 [2024-12-13 
11:21:34.106423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 11:21:34.106436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:103384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.898 [2024-12-13 11:21:34.106442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 11:21:34.106455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:103392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.898 [2024-12-13 11:21:34.106463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 11:21:34.106475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:103400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.898 [2024-12-13 11:21:34.106482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 11:21:34.106495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:103408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.898 [2024-12-13 11:21:34.106501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 11:21:34.106514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:103416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.898 [2024-12-13 11:21:34.106522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 11:21:34.106535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:102792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x180800 00:27:26.898 [2024-12-13 11:21:34.106542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 11:21:34.106558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:103424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.898 [2024-12-13 11:21:34.106565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 11:21:34.106577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:103432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.898 [2024-12-13 11:21:34.106585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 11:21:34.106611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:103440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cc380 len:0x1000 key:0x182700 00:27:26.898 [2024-12-13 11:21:34.106618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 11:21:34.106631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:70 nsid:1 lba:103448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.898 [2024-12-13 11:21:34.106639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 11:21:34.106665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:103456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.898 [2024-12-13 11:21:34.106672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 11:21:34.106686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:103464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.898 [2024-12-13 11:21:34.106693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 11:21:34.106719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:103472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.898 [2024-12-13 11:21:34.106725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 11:21:34.106739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:103480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.898 [2024-12-13 11:21:34.106746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.898 [2024-12-13 11:21:34.106772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:103488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.898 [2024-12-13 11:21:34.106778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.899 [2024-12-13 11:21:34.106792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:103496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c5000 len:0x1000 key:0x182700 00:27:26.899 [2024-12-13 11:21:34.106800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.899 [2024-12-13 11:21:34.106826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:103504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c3f80 len:0x1000 key:0x182700 00:27:26.899 [2024-12-13 11:21:34.106834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.899 [2024-12-13 11:21:34.106861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x180800 00:27:26.899 [2024-12-13 11:21:34.106870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.899 [2024-12-13 11:21:34.106896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:103512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c1e80 len:0x1000 key:0x182700 00:27:26.899 [2024-12-13 11:21:34.106904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.899 [2024-12-13 11:21:34.106930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:103520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.899 [2024-12-13 11:21:34.106936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.899 [2024-12-13 11:21:34.106951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:102864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x180800 00:27:26.899 [2024-12-13 11:21:34.106958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.899 [2024-12-13 11:21:34.106985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:102872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x180800 00:27:26.899 [2024-12-13 11:21:34.106992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.899 [2024-12-13 11:21:34.107006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:103528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.899 [2024-12-13 11:21:34.107013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.899 [2024-12-13 11:21:34.107039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:102880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x180800 00:27:26.899 [2024-12-13 11:21:34.107046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.899 [2024-12-13 11:21:34.107059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:102888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x180800 00:27:26.899 [2024-12-13 11:21:34.107067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.899 [2024-12-13 11:21:34.107080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:103536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.899 [2024-12-13 11:21:34.107086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.899 [2024-12-13 11:21:34.107098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:103544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.899 [2024-12-13 11:21:34.107107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.899 [2024-12-13 11:21:34.107133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:103552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b8a00 len:0x1000 key:0x182700 00:27:26.899 [2024-12-13 11:21:34.107140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.899 [2024-12-13 11:21:34.107166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:103560 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.899 [2024-12-13 11:21:34.107175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.899 [2024-12-13 11:21:34.107202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:103568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.899 [2024-12-13 11:21:34.107209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.899 [2024-12-13 11:21:34.107235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:103576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.899 [2024-12-13 11:21:34.107242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.899 [2024-12-13 11:21:34.107255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:102912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x180800 00:27:26.899 [2024-12-13 11:21:34.107261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.899 [2024-12-13 11:21:34.107292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:103584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.899 [2024-12-13 11:21:34.107300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.899 [2024-12-13 11:21:34.107312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:103592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.899 [2024-12-13 11:21:34.107320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.899 [2024-12-13 11:21:34.120712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.899 [2024-12-13 11:21:34.120730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.899 [2024-12-13 11:21:34.120737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103600 len:8 PRP1 0x0 PRP2 0x0 00:27:26.899 [2024-12-13 11:21:34.120744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.899 [2024-12-13 11:21:34.120807] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4a40 was disconnected and freed. reset controller. 00:27:26.899 [2024-12-13 11:21:34.120816] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:26.899 [2024-12-13 11:21:34.120839] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:26.899 [2024-12-13 11:21:34.122537] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:26.899 [2024-12-13 11:21:34.154143] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
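The try.txt trace dumped above marks each failover with a "Start failover from ... to ..." notice and ends the episode with "Resetting controller successful." once the new path is up. As a quick, unofficial sanity check over a capture like this one (the file path is the one cat'ed above; these counts are not something the test itself asserts):

    try=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
    grep -c 'Start failover from' "$try"                                  # failovers initiated
    grep -c 'Resetting controller successful' "$try"                      # failovers completed
    grep -c 'Unable to perform failover, already in progress' "$try"      # overlapping attempts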
00:27:26.899 [2024-12-13 11:21:37.519108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.899 [2024-12-13 11:21:37.519147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.899 [2024-12-13 11:21:37.519161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:91984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x180800 00:27:26.899 [2024-12-13 11:21:37.519169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.899 [2024-12-13 11:21:37.519178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:92624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389ab80 len:0x1000 key:0x182900 00:27:26.899 [2024-12-13 11:21:37.519185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.899 [2024-12-13 11:21:37.519198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:92000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x180800 00:27:26.899 [2024-12-13 11:21:37.519204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.899 [2024-12-13 11:21:37.519212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:92008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x180800 00:27:26.899 [2024-12-13 11:21:37.519218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.899 [2024-12-13 11:21:37.519225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:92632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.899 [2024-12-13 11:21:37.519232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.899 [2024-12-13 11:21:37.519239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:92640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e0d80 len:0x1000 key:0x182900 00:27:26.899 [2024-12-13 11:21:37.519246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.899 [2024-12-13 11:21:37.519253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:92024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x180800 00:27:26.899 [2024-12-13 11:21:37.519260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.899 [2024-12-13 11:21:37.519270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:92648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.899 [2024-12-13 11:21:37.519276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.899 [2024-12-13 11:21:37.519284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:92656 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:26.899 [2024-12-13 11:21:37.519290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.899 [2024-12-13 11:21:37.519298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:92664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.899 [2024-12-13 11:21:37.519305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.899 [2024-12-13 11:21:37.519312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:92672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013899b00 len:0x1000 key:0x182900 00:27:26.899 [2024-12-13 11:21:37.519318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.899 [2024-12-13 11:21:37.519326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x180800 00:27:26.899 [2024-12-13 11:21:37.519332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.899 [2024-12-13 11:21:37.519339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:92680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013897a00 len:0x1000 key:0x182900 00:27:26.899 [2024-12-13 11:21:37.519345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.899 [2024-12-13 11:21:37.519353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:92056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x180800 00:27:26.899 [2024-12-13 11:21:37.519362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.899 [2024-12-13 11:21:37.519370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:92064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x180800 00:27:26.900 [2024-12-13 11:21:37.519376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.900 [2024-12-13 11:21:37.519384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:92688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.900 [2024-12-13 11:21:37.519390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.900 [2024-12-13 11:21:37.519398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:92696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013893800 len:0x1000 key:0x182900 00:27:26.900 [2024-12-13 11:21:37.519405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.900 [2024-12-13 11:21:37.519413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:92704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013892780 len:0x1000 key:0x182900 00:27:26.900 [2024-12-13 11:21:37.519420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.900 [2024-12-13 11:21:37.519428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:92072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x180800 00:27:26.900 [2024-12-13 11:21:37.519435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.900 [2024-12-13 11:21:37.519443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:92712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c1e80 len:0x1000 key:0x182900 00:27:26.900 [2024-12-13 11:21:37.519449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.900 [2024-12-13 11:21:37.519457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:92720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.900 [2024-12-13 11:21:37.519463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.900 [2024-12-13 11:21:37.519470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:92728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.900 [2024-12-13 11:21:37.519476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.900 [2024-12-13 11:21:37.519485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:92736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bed00 len:0x1000 key:0x182900 00:27:26.900 [2024-12-13 11:21:37.519493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.900 [2024-12-13 11:21:37.519502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:92744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.900 [2024-12-13 11:21:37.519509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.900 [2024-12-13 11:21:37.519517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:92752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.900 [2024-12-13 11:21:37.519523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.900 [2024-12-13 11:21:37.519531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:92760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bbb80 len:0x1000 key:0x182900 00:27:26.900 [2024-12-13 11:21:37.519539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.900 [2024-12-13 11:21:37.519547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:92768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d8980 len:0x1000 key:0x182900 00:27:26.900 [2024-12-13 11:21:37.519554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.900 [2024-12-13 11:21:37.519563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 
nsid:1 lba:92112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x180800 00:27:26.900 [2024-12-13 11:21:37.519569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.900 [2024-12-13 11:21:37.519577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:92776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.900 [2024-12-13 11:21:37.519584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.900 [2024-12-13 11:21:37.519591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:92128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x180800 00:27:26.900 [2024-12-13 11:21:37.519599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.900 [2024-12-13 11:21:37.519607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:92784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.900 [2024-12-13 11:21:37.519613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.900 [2024-12-13 11:21:37.519621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:92792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013891700 len:0x1000 key:0x182900 00:27:26.900 [2024-12-13 11:21:37.519627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.900 [2024-12-13 11:21:37.519635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.900 [2024-12-13 11:21:37.519641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.900 [2024-12-13 11:21:37.519649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:92152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x180800 00:27:26.900 [2024-12-13 11:21:37.519655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.900 [2024-12-13 11:21:37.519662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:92808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388e580 len:0x1000 key:0x182900 00:27:26.900 [2024-12-13 11:21:37.519669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.900 [2024-12-13 11:21:37.519676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:92816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.900 [2024-12-13 11:21:37.519682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.900 [2024-12-13 11:21:37.519690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:92168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x180800 00:27:26.900 [2024-12-13 11:21:37.519696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.900 [2024-12-13 11:21:37.519704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:92824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.900 [2024-12-13 11:21:37.519710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.900 [2024-12-13 11:21:37.519717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:92832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.900 [2024-12-13 11:21:37.519724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.900 [2024-12-13 11:21:37.519731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:92840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013889300 len:0x1000 key:0x182900 00:27:26.900 [2024-12-13 11:21:37.519737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.900 [2024-12-13 11:21:37.519744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:92848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013888280 len:0x1000 key:0x182900 00:27:26.900 [2024-12-13 11:21:37.519750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.900 [2024-12-13 11:21:37.519758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:92856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013887200 len:0x1000 key:0x182900 00:27:26.900 [2024-12-13 11:21:37.519764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.900 [2024-12-13 11:21:37.519771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:92184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x180800 00:27:26.900 [2024-12-13 11:21:37.519777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.900 [2024-12-13 11:21:37.519785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:92864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013885100 len:0x1000 key:0x182900 00:27:26.900 [2024-12-13 11:21:37.519791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.900 [2024-12-13 11:21:37.519798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:92872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013884080 len:0x1000 key:0x182900 00:27:26.900 [2024-12-13 11:21:37.519804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.900 [2024-12-13 11:21:37.519811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x180800 00:27:26.901 [2024-12-13 11:21:37.519817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.901 [2024-12-13 11:21:37.519825] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:92880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013881f80 len:0x1000 key:0x182900 00:27:26.901 [2024-12-13 11:21:37.519831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.901 [2024-12-13 11:21:37.519838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:92888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.901 [2024-12-13 11:21:37.519844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.901 [2024-12-13 11:21:37.519853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:92216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x180800 00:27:26.901 [2024-12-13 11:21:37.519861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.901 [2024-12-13 11:21:37.519869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:92224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x180800 00:27:26.901 [2024-12-13 11:21:37.519875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.901 [2024-12-13 11:21:37.519883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:92232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x180800 00:27:26.901 [2024-12-13 11:21:37.519889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.901 [2024-12-13 11:21:37.519896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:92240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x180800 00:27:26.901 [2024-12-13 11:21:37.519902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.901 [2024-12-13 11:21:37.519910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:92896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.901 [2024-12-13 11:21:37.519916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.901 [2024-12-13 11:21:37.519924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:92904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.901 [2024-12-13 11:21:37.519930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.901 [2024-12-13 11:21:37.519938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:92912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.901 [2024-12-13 11:21:37.519944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.901 [2024-12-13 11:21:37.519951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:92920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013878b00 len:0x1000 key:0x182900 00:27:26.901 
[2024-12-13 11:21:37.519957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.901 [2024-12-13 11:21:37.519964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:92928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.901 [2024-12-13 11:21:37.519971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.901 [2024-12-13 11:21:37.519979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:92264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x180800 00:27:26.901 [2024-12-13 11:21:37.519985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.901 [2024-12-13 11:21:37.519993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:92936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f9980 len:0x1000 key:0x182900 00:27:26.901 [2024-12-13 11:21:37.519999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.901 [2024-12-13 11:21:37.520006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:92944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.901 [2024-12-13 11:21:37.520012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.901 [2024-12-13 11:21:37.520019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:92288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x180800 00:27:26.901 [2024-12-13 11:21:37.520027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.901 [2024-12-13 11:21:37.520035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:92952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f6800 len:0x1000 key:0x182900 00:27:26.901 [2024-12-13 11:21:37.520042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.901 [2024-12-13 11:21:37.520049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:92304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x180800 00:27:26.901 [2024-12-13 11:21:37.520055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.901 [2024-12-13 11:21:37.520062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.901 [2024-12-13 11:21:37.520068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.901 [2024-12-13 11:21:37.520076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:92320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x180800 00:27:26.901 [2024-12-13 11:21:37.520083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 
dnr:0 00:27:26.901 [2024-12-13 11:21:37.520091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:92968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b8a00 len:0x1000 key:0x182900 00:27:26.901 [2024-12-13 11:21:37.520097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.901 [2024-12-13 11:21:37.520104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:92976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.901 [2024-12-13 11:21:37.520110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.901 [2024-12-13 11:21:37.520117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:92984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.901 [2024-12-13 11:21:37.520123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.901 [2024-12-13 11:21:37.520132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:92992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b5880 len:0x1000 key:0x182900 00:27:26.901 [2024-12-13 11:21:37.520138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.901 [2024-12-13 11:21:37.520145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:93000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b4800 len:0x1000 key:0x182900 00:27:26.901 [2024-12-13 11:21:37.520151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.901 [2024-12-13 11:21:37.520159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:93008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b3780 len:0x1000 key:0x182900 00:27:26.901 [2024-12-13 11:21:37.520165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.901 [2024-12-13 11:21:37.520172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:93016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.901 [2024-12-13 11:21:37.520178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.901 [2024-12-13 11:21:37.520187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:93024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.901 [2024-12-13 11:21:37.520193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.901 [2024-12-13 11:21:37.520201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:93032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d1600 len:0x1000 key:0x182900 00:27:26.901 [2024-12-13 11:21:37.520208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.901 [2024-12-13 11:21:37.520215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:93040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:26.901 [2024-12-13 11:21:37.520222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.901 [2024-12-13 11:21:37.520230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:93048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.901 [2024-12-13 11:21:37.520236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.901 [2024-12-13 11:21:37.520243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:92368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x180800 00:27:26.901 [2024-12-13 11:21:37.520250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.901 [2024-12-13 11:21:37.520257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:93056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.901 [2024-12-13 11:21:37.520264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.901 [2024-12-13 11:21:37.520274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:93064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.901 [2024-12-13 11:21:37.520281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.901 [2024-12-13 11:21:37.520288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:92376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x180800 00:27:26.901 [2024-12-13 11:21:37.520294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.901 [2024-12-13 11:21:37.520301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:93072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b1680 len:0x1000 key:0x182900 00:27:26.901 [2024-12-13 11:21:37.520308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.901 [2024-12-13 11:21:37.520316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:92392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x180800 00:27:26.901 [2024-12-13 11:21:37.520322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.901 [2024-12-13 11:21:37.520330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:93080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138af580 len:0x1000 key:0x182900 00:27:26.901 [2024-12-13 11:21:37.520336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.901 [2024-12-13 11:21:37.520344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:93088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ae500 len:0x1000 key:0x182900 00:27:26.901 [2024-12-13 11:21:37.520351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 
sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:93096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ad480 len:0x1000 key:0x182900 00:27:26.902 [2024-12-13 11:21:37.520365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:92416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x180800 00:27:26.902 [2024-12-13 11:21:37.520379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:93104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.902 [2024-12-13 11:21:37.520393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:93112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f4700 len:0x1000 key:0x182900 00:27:26.902 [2024-12-13 11:21:37.520406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.902 [2024-12-13 11:21:37.520420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:93128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f2600 len:0x1000 key:0x182900 00:27:26.902 [2024-12-13 11:21:37.520434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:93136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.902 [2024-12-13 11:21:37.520448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:93144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.902 [2024-12-13 11:21:37.520461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:93152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.902 [2024-12-13 11:21:37.520474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:93160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.902 
[2024-12-13 11:21:37.520490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:93168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.902 [2024-12-13 11:21:37.520503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:93176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ec300 len:0x1000 key:0x182900 00:27:26.902 [2024-12-13 11:21:37.520518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:93184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138eb280 len:0x1000 key:0x182900 00:27:26.902 [2024-12-13 11:21:37.520534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:92480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x180800 00:27:26.902 [2024-12-13 11:21:37.520550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:93192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e9180 len:0x1000 key:0x182900 00:27:26.902 [2024-12-13 11:21:37.520564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:93200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e8100 len:0x1000 key:0x182900 00:27:26.902 [2024-12-13 11:21:37.520577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:93208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.902 [2024-12-13 11:21:37.520591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:92504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x180800 00:27:26.902 [2024-12-13 11:21:37.520605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:92512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x180800 00:27:26.902 [2024-12-13 11:21:37.520619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:93216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.902 [2024-12-13 11:21:37.520632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:93224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a9280 len:0x1000 key:0x182900 00:27:26.902 [2024-12-13 11:21:37.520645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:93232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.902 [2024-12-13 11:21:37.520660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:93240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a7180 len:0x1000 key:0x182900 00:27:26.902 [2024-12-13 11:21:37.520673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.902 [2024-12-13 11:21:37.520687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:93256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a5080 len:0x1000 key:0x182900 00:27:26.902 [2024-12-13 11:21:37.520702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:93264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.902 [2024-12-13 11:21:37.520716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:93272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.902 [2024-12-13 11:21:37.520729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:93280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cb300 len:0x1000 key:0x182900 00:27:26.902 [2024-12-13 11:21:37.520742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:93288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:26.902 [2024-12-13 11:21:37.520757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:93296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c9200 len:0x1000 key:0x182900 00:27:26.902 [2024-12-13 11:21:37.520771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:93304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c8180 len:0x1000 key:0x182900 00:27:26.902 [2024-12-13 11:21:37.520784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:92568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x180800 00:27:26.902 [2024-12-13 11:21:37.520797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:93312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.902 [2024-12-13 11:21:37.520813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:93320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.902 [2024-12-13 11:21:37.520826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:93328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c3f80 len:0x1000 key:0x182900 00:27:26.902 [2024-12-13 11:21:37.520840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:93336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dbb00 len:0x1000 key:0x182900 00:27:26.902 [2024-12-13 11:21:37.520854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.902 [2024-12-13 11:21:37.520869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:92592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x180800 00:27:26.902 [2024-12-13 11:21:37.520882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 
sqhd:5310 p:0 m:0 dnr:0 00:27:26.902 [2024-12-13 11:21:37.520889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:93352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.903 [2024-12-13 11:21:37.520895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.903 [2024-12-13 11:21:37.520903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:92600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x180800 00:27:26.903 [2024-12-13 11:21:37.520909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.903 [2024-12-13 11:21:37.520917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:93360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389fe00 len:0x1000 key:0x182900 00:27:26.903 [2024-12-13 11:21:37.520924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.903 [2024-12-13 11:21:37.529998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:92608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x180800 00:27:26.903 [2024-12-13 11:21:37.530019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.903 [2024-12-13 11:21:37.531931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.903 [2024-12-13 11:21:37.531943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.903 [2024-12-13 11:21:37.531950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93368 len:8 PRP1 0x0 PRP2 0x0 00:27:26.903 [2024-12-13 11:21:37.531956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.903 [2024-12-13 11:21:37.531992] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 00:27:26.903 [2024-12-13 11:21:37.532001] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:27:26.903 [2024-12-13 11:21:37.532008] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:26.903 [2024-12-13 11:21:37.532037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.903 [2024-12-13 11:21:37.532046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.903 [2024-12-13 11:21:37.532054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.903 [2024-12-13 11:21:37.532060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.903 [2024-12-13 11:21:37.532072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.903 [2024-12-13 11:21:37.532079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.903 [2024-12-13 11:21:37.532087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.903 [2024-12-13 11:21:37.532095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.903 [2024-12-13 11:21:37.547749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:26.903 [2024-12-13 11:21:37.547764] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:27:26.903 [2024-12-13 11:21:37.547771] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:26.903 [2024-12-13 11:21:37.549402] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:26.903 [2024-12-13 11:21:37.582449] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:26.903 [2024-12-13 11:21:41.865402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:27232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x180800 00:27:26.903 [2024-12-13 11:21:41.865439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.903 [2024-12-13 11:21:41.865455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:27240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x180800 00:27:26.903 [2024-12-13 11:21:41.865462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.903 [2024-12-13 11:21:41.865471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:27968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013898a80 len:0x1000 key:0x182700 00:27:26.903 [2024-12-13 11:21:41.865477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.903 [2024-12-13 11:21:41.865485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:27248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x180800 00:27:26.903 [2024-12-13 11:21:41.865491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.903 [2024-12-13 11:21:41.865499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:27976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.903 [2024-12-13 11:21:41.865505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.903 [2024-12-13 11:21:41.865513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:27984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013895900 len:0x1000 key:0x182700 00:27:26.903 [2024-12-13 11:21:41.865519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.903 [2024-12-13 11:21:41.865526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:27992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013894880 len:0x1000 key:0x182700 00:27:26.903 [2024-12-13 11:21:41.865533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.903 [2024-12-13 11:21:41.865540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:28000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013893800 len:0x1000 key:0x182700 00:27:26.903 [2024-12-13 11:21:41.865546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.903 [2024-12-13 11:21:41.865559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x180800 00:27:26.903 [2024-12-13 11:21:41.865565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.903 [2024-12-13 11:21:41.865573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:85 nsid:1 lba:28008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.903 [2024-12-13 11:21:41.865579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.903 [2024-12-13 11:21:41.865587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x180800 00:27:26.903 [2024-12-13 11:21:41.865593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.903 [2024-12-13 11:21:41.865600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.903 [2024-12-13 11:21:41.865607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.903 [2024-12-13 11:21:41.865616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:27296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x180800 00:27:26.903 [2024-12-13 11:21:41.865623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.903 [2024-12-13 11:21:41.865630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c3f80 len:0x1000 key:0x182700 00:27:26.903 [2024-12-13 11:21:41.865637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.903 [2024-12-13 11:21:41.865644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:28032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dbb00 len:0x1000 key:0x182700 00:27:26.903 [2024-12-13 11:21:41.865651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.903 [2024-12-13 11:21:41.865658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:28040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138daa80 len:0x1000 key:0x182700 00:27:26.903 [2024-12-13 11:21:41.865667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.903 [2024-12-13 11:21:41.865675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:28048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d9a00 len:0x1000 key:0x182700 00:27:26.903 [2024-12-13 11:21:41.865682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.903 [2024-12-13 11:21:41.865690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:27312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x180800 00:27:26.903 [2024-12-13 11:21:41.865696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.903 [2024-12-13 11:21:41.865704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a0e80 len:0x1000 key:0x182700 00:27:26.903 
[2024-12-13 11:21:41.865710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.903 [2024-12-13 11:21:41.865718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:28064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389fe00 len:0x1000 key:0x182700 00:27:26.903 [2024-12-13 11:21:41.865725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.903 [2024-12-13 11:21:41.865735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:28072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.903 [2024-12-13 11:21:41.865742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.903 [2024-12-13 11:21:41.865751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:28080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b3780 len:0x1000 key:0x182700 00:27:26.903 [2024-12-13 11:21:41.865757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.903 [2024-12-13 11:21:41.865765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:28088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.903 [2024-12-13 11:21:41.865771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.903 [2024-12-13 11:21:41.865779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:28096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d2680 len:0x1000 key:0x182700 00:27:26.903 [2024-12-13 11:21:41.865785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.903 [2024-12-13 11:21:41.865796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:28104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.903 [2024-12-13 11:21:41.865803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.903 [2024-12-13 11:21:41.865812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:28112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d0580 len:0x1000 key:0x182700 00:27:26.904 [2024-12-13 11:21:41.865818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.904 [2024-12-13 11:21:41.865826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:27360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x180800 00:27:26.904 [2024-12-13 11:21:41.865834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.904 [2024-12-13 11:21:41.865842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:28120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ce480 len:0x1000 key:0x182700 00:27:26.904 [2024-12-13 11:21:41.865849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 
m:0 dnr:0 00:27:26.904 [2024-12-13 11:21:41.865857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:28128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cd400 len:0x1000 key:0x182700 00:27:26.904 [2024-12-13 11:21:41.865863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.904 [2024-12-13 11:21:41.865871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:27384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x180800 00:27:26.904 [2024-12-13 11:21:41.865877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.904 [2024-12-13 11:21:41.865885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:28136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b2700 len:0x1000 key:0x182700 00:27:26.904 [2024-12-13 11:21:41.865891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.904 [2024-12-13 11:21:41.865904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:27400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x180800 00:27:26.904 [2024-12-13 11:21:41.865910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.904 [2024-12-13 11:21:41.865918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:28144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b0600 len:0x1000 key:0x182700 00:27:26.904 [2024-12-13 11:21:41.865925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.904 [2024-12-13 11:21:41.865933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:27416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x180800 00:27:26.904 [2024-12-13 11:21:41.865940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.904 [2024-12-13 11:21:41.865948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:27424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x180800 00:27:26.904 [2024-12-13 11:21:41.865954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.904 [2024-12-13 11:21:41.865961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:27432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x180800 00:27:26.904 [2024-12-13 11:21:41.865968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.904 [2024-12-13 11:21:41.865976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:27440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x180800 00:27:26.904 [2024-12-13 11:21:41.865982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.904 [2024-12-13 11:21:41.865989] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:27448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x180800 00:27:26.904 [2024-12-13 11:21:41.865995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.904 [2024-12-13 11:21:41.866003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:28152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f4700 len:0x1000 key:0x182700 00:27:26.904 [2024-12-13 11:21:41.866009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.904 [2024-12-13 11:21:41.866016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:28160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.904 [2024-12-13 11:21:41.866022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.904 [2024-12-13 11:21:41.866029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:28168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.904 [2024-12-13 11:21:41.866035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.904 [2024-12-13 11:21:41.866042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:28176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.904 [2024-12-13 11:21:41.866048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.904 [2024-12-13 11:21:41.866055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:27472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x180800 00:27:26.904 [2024-12-13 11:21:41.866063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.904 [2024-12-13 11:21:41.866071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:28184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ef480 len:0x1000 key:0x182700 00:27:26.904 [2024-12-13 11:21:41.866077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.904 [2024-12-13 11:21:41.866085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:28192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.904 [2024-12-13 11:21:41.866091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.904 [2024-12-13 11:21:41.866099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:28200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.904 [2024-12-13 11:21:41.866105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.904 [2024-12-13 11:21:41.866112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:27488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x180800 00:27:26.904 [2024-12-13 11:21:41.866118] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.904 [2024-12-13 11:21:41.866126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:27496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x180800 00:27:26.904 [2024-12-13 11:21:41.866132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.904 [2024-12-13 11:21:41.866139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:28208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bbb80 len:0x1000 key:0x182700 00:27:26.904 [2024-12-13 11:21:41.866146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.904 [2024-12-13 11:21:41.866154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:27512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x180800 00:27:26.904 [2024-12-13 11:21:41.866160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.904 [2024-12-13 11:21:41.866168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:28216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d7900 len:0x1000 key:0x182700 00:27:26.904 [2024-12-13 11:21:41.866174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.904 [2024-12-13 11:21:41.866181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:27520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x180800 00:27:26.904 [2024-12-13 11:21:41.866187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.904 [2024-12-13 11:21:41.866194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x180800 00:27:26.904 [2024-12-13 11:21:41.866201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.904 [2024-12-13 11:21:41.866208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:28224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.904 [2024-12-13 11:21:41.866215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.904 [2024-12-13 11:21:41.866224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:28232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.904 [2024-12-13 11:21:41.866230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.904 [2024-12-13 11:21:41.866238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:28240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013890680 len:0x1000 key:0x182700 00:27:26.904 [2024-12-13 11:21:41.866244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 
00:27:26.904 [2024-12-13 11:21:41.866252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.904 [2024-12-13 11:21:41.866258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.904 [2024-12-13 11:21:41.866269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388e580 len:0x1000 key:0x182700 00:27:26.904 [2024-12-13 11:21:41.866276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.904 [2024-12-13 11:21:41.866283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:27560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x180800 00:27:26.904 [2024-12-13 11:21:41.866289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.904 [2024-12-13 11:21:41.866297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x180800 00:27:26.905 [2024-12-13 11:21:41.866304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 [2024-12-13 11:21:41.866311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:28264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.905 [2024-12-13 11:21:41.866317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 [2024-12-13 11:21:41.866325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:28272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.905 [2024-12-13 11:21:41.866331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 [2024-12-13 11:21:41.866338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:28280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389cc80 len:0x1000 key:0x182700 00:27:26.905 [2024-12-13 11:21:41.866344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 [2024-12-13 11:21:41.866352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:27592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x180800 00:27:26.905 [2024-12-13 11:21:41.866360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 [2024-12-13 11:21:41.866368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:27600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x180800 00:27:26.905 [2024-12-13 11:21:41.866374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 [2024-12-13 11:21:41.866383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:28288 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:26.905 [2024-12-13 11:21:41.866389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 [2024-12-13 11:21:41.866398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:27616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x180800 00:27:26.905 [2024-12-13 11:21:41.866404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 [2024-12-13 11:21:41.866411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:27624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x180800 00:27:26.905 [2024-12-13 11:21:41.866417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 [2024-12-13 11:21:41.866425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:28296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.905 [2024-12-13 11:21:41.866431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 [2024-12-13 11:21:41.866438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:28304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.905 [2024-12-13 11:21:41.866444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 [2024-12-13 11:21:41.866451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x180800 00:27:26.905 [2024-12-13 11:21:41.866457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 [2024-12-13 11:21:41.866464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:28312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.905 [2024-12-13 11:21:41.866471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 [2024-12-13 11:21:41.866479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:28320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ea200 len:0x1000 key:0x182700 00:27:26.905 [2024-12-13 11:21:41.866485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 [2024-12-13 11:21:41.866493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:28328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.905 [2024-12-13 11:21:41.866499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 [2024-12-13 11:21:41.866506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:28336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.905 [2024-12-13 11:21:41.866512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 
[2024-12-13 11:21:41.866519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:27664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x180800 00:27:26.905 [2024-12-13 11:21:41.866526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 [2024-12-13 11:21:41.866533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:28344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.905 [2024-12-13 11:21:41.866540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 [2024-12-13 11:21:41.866547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:28352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e4f80 len:0x1000 key:0x182700 00:27:26.905 [2024-12-13 11:21:41.866554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 [2024-12-13 11:21:41.866563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:27688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x180800 00:27:26.905 [2024-12-13 11:21:41.866569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 [2024-12-13 11:21:41.866576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.905 [2024-12-13 11:21:41.866582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 [2024-12-13 11:21:41.866590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:28368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.905 [2024-12-13 11:21:41.866596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 [2024-12-13 11:21:41.866604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:28376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.905 [2024-12-13 11:21:41.866609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 [2024-12-13 11:21:41.866617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:27704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x180800 00:27:26.905 [2024-12-13 11:21:41.866623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 [2024-12-13 11:21:41.866630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x180800 00:27:26.905 [2024-12-13 11:21:41.866637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 [2024-12-13 11:21:41.866645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:27720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 
key:0x180800 00:27:26.905 [2024-12-13 11:21:41.866651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 [2024-12-13 11:21:41.866659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:27728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x180800 00:27:26.905 [2024-12-13 11:21:41.866665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 [2024-12-13 11:21:41.866673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:27736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x180800 00:27:26.905 [2024-12-13 11:21:41.866679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 [2024-12-13 11:21:41.866687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:28384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013888280 len:0x1000 key:0x182700 00:27:26.905 [2024-12-13 11:21:41.866694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 [2024-12-13 11:21:41.866701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:28392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013887200 len:0x1000 key:0x182700 00:27:26.905 [2024-12-13 11:21:41.866708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 [2024-12-13 11:21:41.866715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:27760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x180800 00:27:26.905 [2024-12-13 11:21:41.866723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 [2024-12-13 11:21:41.866730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:28400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.905 [2024-12-13 11:21:41.866736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 [2024-12-13 11:21:41.866744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.905 [2024-12-13 11:21:41.866750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 [2024-12-13 11:21:41.866757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:28416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.905 [2024-12-13 11:21:41.866764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 [2024-12-13 11:21:41.866771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:27792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x180800 00:27:26.905 [2024-12-13 11:21:41.866777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 [2024-12-13 11:21:41.866784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:28424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013880f00 len:0x1000 key:0x182700 00:27:26.905 [2024-12-13 11:21:41.866790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 [2024-12-13 11:21:41.866798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:28432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.905 [2024-12-13 11:21:41.866805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 [2024-12-13 11:21:41.866812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:28440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387ee00 len:0x1000 key:0x182700 00:27:26.905 [2024-12-13 11:21:41.866818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.905 [2024-12-13 11:21:41.866826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:27808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x180800 00:27:26.905 [2024-12-13 11:21:41.866832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.906 [2024-12-13 11:21:41.866839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:28448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387cd00 len:0x1000 key:0x182700 00:27:26.906 [2024-12-13 11:21:41.866845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.906 [2024-12-13 11:21:41.866853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:28456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.906 [2024-12-13 11:21:41.866860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.906 [2024-12-13 11:21:41.866868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:28464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.906 [2024-12-13 11:21:41.866874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.906 [2024-12-13 11:21:41.866882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:27824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x180800 00:27:26.906 [2024-12-13 11:21:41.866889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.906 [2024-12-13 11:21:41.866897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:27832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x180800 00:27:26.906 [2024-12-13 11:21:41.866903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.906 [2024-12-13 11:21:41.866911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 
nsid:1 lba:28472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.906 [2024-12-13 11:21:41.866917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.906 [2024-12-13 11:21:41.866925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:28480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013876a00 len:0x1000 key:0x182700 00:27:26.906 [2024-12-13 11:21:41.866930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.906 [2024-12-13 11:21:41.866938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:27856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x180800 00:27:26.906 [2024-12-13 11:21:41.866944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.906 [2024-12-13 11:21:41.866951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:27864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x180800 00:27:26.906 [2024-12-13 11:21:41.866958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.906 [2024-12-13 11:21:41.866965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:27872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x180800 00:27:26.906 [2024-12-13 11:21:41.866971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.906 [2024-12-13 11:21:41.866979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:27880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x180800 00:27:26.906 [2024-12-13 11:21:41.866985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.906 [2024-12-13 11:21:41.866992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:28488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.906 [2024-12-13 11:21:41.866998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.906 [2024-12-13 11:21:41.867005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cb300 len:0x1000 key:0x182700 00:27:26.906 [2024-12-13 11:21:41.867011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.906 [2024-12-13 11:21:41.867019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:28504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca280 len:0x1000 key:0x182700 00:27:26.906 [2024-12-13 11:21:41.867025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.906 [2024-12-13 11:21:41.867032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:28512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.906 [2024-12-13 11:21:41.867038] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.906 [2024-12-13 11:21:41.867047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:27896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x180800 00:27:26.906 [2024-12-13 11:21:41.867053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.906 [2024-12-13 11:21:41.867061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x180800 00:27:26.906 [2024-12-13 11:21:41.867067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.906 [2024-12-13 11:21:41.867075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:28520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c6080 len:0x1000 key:0x182700 00:27:26.906 [2024-12-13 11:21:41.867081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.906 [2024-12-13 11:21:41.867089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x180800 00:27:26.906 [2024-12-13 11:21:41.867095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.906 [2024-12-13 11:21:41.867102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x180800 00:27:26.906 [2024-12-13 11:21:41.867108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.906 [2024-12-13 11:21:41.867115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:28528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bab00 len:0x1000 key:0x182700 00:27:26.906 [2024-12-13 11:21:41.867122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.906 [2024-12-13 11:21:41.867129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:28536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b9a80 len:0x1000 key:0x182700 00:27:26.906 [2024-12-13 11:21:41.867136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.906 [2024-12-13 11:21:41.867143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:27928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x180800 00:27:26.906 [2024-12-13 11:21:41.867149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.906 [2024-12-13 11:21:41.867157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b7980 len:0x1000 key:0x182700 00:27:26.906 [2024-12-13 11:21:41.867163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.906 [2024-12-13 11:21:41.867171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:28552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.906 [2024-12-13 11:21:41.867177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.906 [2024-12-13 11:21:41.867185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:27944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x180800 00:27:26.906 [2024-12-13 11:21:41.867191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.906 [2024-12-13 11:21:41.876140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:28560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b4800 len:0x1000 key:0x182700 00:27:26.906 [2024-12-13 11:21:41.876155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.906 [2024-12-13 11:21:41.876168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:28568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.906 [2024-12-13 11:21:41.876177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.906 [2024-12-13 11:21:41.876188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:28576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.906 [2024-12-13 11:21:41.876196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92918000 sqhd:5310 p:0 m:0 dnr:0 00:27:26.906 [2024-12-13 11:21:41.877966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.906 [2024-12-13 11:21:41.877979] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.906 [2024-12-13 11:21:41.877988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:28584 len:8 PRP1 0x0 PRP2 0x0 00:27:26.906 [2024-12-13 11:21:41.877997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.906 [2024-12-13 11:21:41.878038] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 00:27:26.906 [2024-12-13 11:21:41.878049] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:27:26.906 [2024-12-13 11:21:41.878058] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:26.906 [2024-12-13 11:21:41.878091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.906 [2024-12-13 11:21:41.878102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.906 [2024-12-13 11:21:41.878114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.906 [2024-12-13 11:21:41.878123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.906 [2024-12-13 11:21:41.878133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.906 [2024-12-13 11:21:41.878142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.906 [2024-12-13 11:21:41.878151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.906 [2024-12-13 11:21:41.878159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.906 [2024-12-13 11:21:41.895145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:26.906 [2024-12-13 11:21:41.895170] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:27:26.906 [2024-12-13 11:21:41.895180] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:26.906 [2024-12-13 11:21:41.896887] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:26.906 [2024-12-13 11:21:41.924779] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
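The failover sequence recorded above (queued I/O aborted with "SQ DELETION" while the qpair is torn down, then "Start failover from 192.168.100.8:4422 to 192.168.100.8:4420" and a successful controller reset) depends on the same bdev controller having been attached over several listener ports. The sketch below only condenses the rpc.py calls that appear later in this log into one place; the address, ports, NQN and socket path are the values used by this run, the rpc.py path is shortened, and this is an illustration rather than the verbatim failover.sh script.

  # Target side: expose the subsystem on the extra RDMA listeners.
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
  # Initiator side (bdevperf's RPC socket): attach the same bdev name once per path,
  # so bdev_nvme holds 4420/4421/4422 as alternate trids for NVMe0.
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # Detaching one path (as the test does with bdev_nvme_detach_controller ... -s 4420)
  # then forces the failovers whose successful resets are counted below.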
00:27:26.906 
00:27:26.907 Latency(us) 
00:27:26.907 [2024-12-13T10:21:47.476Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:27:26.907 [2024-12-13T10:21:47.476Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:27:26.907 Verification LBA range: start 0x0 length 0x4000 
00:27:26.907 NVMe0n1 : 15.00 21314.71 83.26 312.72 0.00 5907.58 317.06 1043915.66 
00:27:26.907 [2024-12-13T10:21:47.476Z] =================================================================================================================== 
00:27:26.907 [2024-12-13T10:21:47.476Z] Total : 21314.71 83.26 312.72 0.00 5907.58 317.06 1043915.66 
00:27:26.907 Received shutdown signal, test time was about 15.000000 seconds 
00:27:26.907 
00:27:26.907 Latency(us) 
00:27:26.907 [2024-12-13T10:21:47.476Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:27:26.907 [2024-12-13T10:21:47.476Z] =================================================================================================================== 
00:27:26.907 [2024-12-13T10:21:47.476Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:27:26.907 11:21:47 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 
00:27:26.907 11:21:47 -- host/failover.sh@65 -- # count=3 
00:27:26.907 11:21:47 -- host/failover.sh@67 -- # (( count != 3 )) 
00:27:26.907 11:21:47 -- host/failover.sh@73 -- # bdevperf_pid=1768765 
00:27:26.907 11:21:47 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 
00:27:26.907 11:21:47 -- host/failover.sh@75 -- # waitforlisten 1768765 /var/tmp/bdevperf.sock 
00:27:26.907 11:21:47 -- common/autotest_common.sh@829 -- # '[' -z 1768765 ']' 
00:27:26.907 11:21:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:27:26.907 11:21:47 -- common/autotest_common.sh@834 -- # local max_retries=100 
00:27:26.907 11:21:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:27:26.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
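The grep -c 'Resetting controller successful' / (( count != 3 )) pair just above is the pass criterion for the 15-second run: each of the three path removals must have produced one successful controller reset in the captured output. A minimal stand-alone version of that check is sketched here; the try.txt path is borrowed from the later steps of this log, and the exact file graded by failover.sh at this point is not shown on this line.

  #!/usr/bin/env bash
  log=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
  # Count how many failover-triggered resets completed successfully.
  count=$(grep -c 'Resetting controller successful' "$log")
  if (( count != 3 )); then
      echo "expected 3 successful resets, got $count" >&2
      exit 1
  fi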
00:27:26.907 11:21:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:26.907 11:21:47 -- common/autotest_common.sh@10 -- # set +x 00:27:27.844 11:21:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:27.844 11:21:48 -- common/autotest_common.sh@862 -- # return 0 00:27:27.844 11:21:48 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:27:27.844 [2024-12-13 11:21:48.321076] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:27:27.844 11:21:48 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:27:28.103 [2024-12-13 11:21:48.485621] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:27:28.103 11:21:48 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:28.362 NVMe0n1 00:27:28.362 11:21:48 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:28.621 00:27:28.621 11:21:48 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:28.621 00:27:28.880 11:21:49 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:28.880 11:21:49 -- host/failover.sh@82 -- # grep -q NVMe0 00:27:28.880 11:21:49 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:29.139 11:21:49 -- host/failover.sh@87 -- # sleep 3 00:27:32.427 11:21:52 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:32.427 11:21:52 -- host/failover.sh@88 -- # grep -q NVMe0 00:27:32.427 11:21:52 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:32.427 11:21:52 -- host/failover.sh@90 -- # run_test_pid=1769837 00:27:32.427 11:21:52 -- host/failover.sh@92 -- # wait 1769837 00:27:33.364 0 00:27:33.364 11:21:53 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:33.364 [2024-12-13 11:21:47.383492] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:27:33.364 [2024-12-13 11:21:47.383540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1768765 ] 
00:27:33.364 EAL: No free 2048 kB hugepages reported on node 1 
00:27:33.364 [2024-12-13 11:21:47.435868] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 
00:27:33.364 [2024-12-13 11:21:47.498207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 
00:27:33.364 [2024-12-13 11:21:49.535375] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 
00:27:33.364 [2024-12-13 11:21:49.536033] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:33.364 [2024-12-13 11:21:49.536058] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:27:33.364 [2024-12-13 11:21:49.554787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 
00:27:33.364 [2024-12-13 11:21:49.570387] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:33.364 Running I/O for 1 seconds... 
00:27:33.364 
00:27:33.364 Latency(us) 
00:27:33.364 [2024-12-13T10:21:53.933Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:27:33.364 [2024-12-13T10:21:53.933Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:27:33.364 Verification LBA range: start 0x0 length 0x4000 
00:27:33.364 NVMe0n1 : 1.00 26897.44 105.07 0.00 0.00 4737.05 1049.79 9951.76 
00:27:33.364 [2024-12-13T10:21:53.933Z] =================================================================================================================== 
00:27:33.364 [2024-12-13T10:21:53.933Z] Total : 26897.44 105.07 0.00 0.00 4737.05 1049.79 9951.76 
00:27:33.364 11:21:53 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:27:33.364 11:21:53 -- host/failover.sh@95 -- # grep -q NVMe0 
00:27:33.623 11:21:54 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
00:27:33.888 11:21:54 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:27:33.888 11:21:54 -- host/failover.sh@99 -- # grep -q NVMe0 
00:27:33.888 11:21:54 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
00:27:34.146 11:21:54 -- host/failover.sh@101 -- # sleep 3 
00:27:37.429 11:21:57 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:27:37.429 11:21:57 -- host/failover.sh@103 -- # grep -q NVMe0 
00:27:37.429 11:21:57 -- host/failover.sh@108 -- # killprocess 1768765 
00:27:37.429 11:21:57 -- common/autotest_common.sh@936 -- # '[' -z 1768765 ']' 
00:27:37.429 11:21:57 -- common/autotest_common.sh@940 -- # kill -0 1768765 
00:27:37.429 11:21:57 -- common/autotest_common.sh@941 -- # uname 
00:27:37.429 11:21:57 -- common/autotest_common.sh@941 
-- # '[' Linux = Linux ']' 00:27:37.429 11:21:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1768765 00:27:37.429 11:21:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:37.429 11:21:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:37.429 11:21:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1768765' 00:27:37.429 killing process with pid 1768765 00:27:37.429 11:21:57 -- common/autotest_common.sh@955 -- # kill 1768765 00:27:37.429 11:21:57 -- common/autotest_common.sh@960 -- # wait 1768765 00:27:37.429 11:21:57 -- host/failover.sh@110 -- # sync 00:27:37.429 11:21:57 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:37.687 11:21:58 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:27:37.687 11:21:58 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:37.687 11:21:58 -- host/failover.sh@116 -- # nvmftestfini 00:27:37.687 11:21:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:37.687 11:21:58 -- nvmf/common.sh@116 -- # sync 00:27:37.687 11:21:58 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:27:37.687 11:21:58 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:27:37.687 11:21:58 -- nvmf/common.sh@119 -- # set +e 00:27:37.687 11:21:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:37.687 11:21:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:27:37.687 rmmod nvme_rdma 00:27:37.687 rmmod nvme_fabrics 00:27:37.687 11:21:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:37.687 11:21:58 -- nvmf/common.sh@123 -- # set -e 00:27:37.687 11:21:58 -- nvmf/common.sh@124 -- # return 0 00:27:37.687 11:21:58 -- nvmf/common.sh@477 -- # '[' -n 1765524 ']' 00:27:37.687 11:21:58 -- nvmf/common.sh@478 -- # killprocess 1765524 00:27:37.687 11:21:58 -- common/autotest_common.sh@936 -- # '[' -z 1765524 ']' 00:27:37.687 11:21:58 -- common/autotest_common.sh@940 -- # kill -0 1765524 00:27:37.687 11:21:58 -- common/autotest_common.sh@941 -- # uname 00:27:37.687 11:21:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:37.687 11:21:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1765524 00:27:37.947 11:21:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:37.947 11:21:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:37.947 11:21:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1765524' 00:27:37.947 killing process with pid 1765524 00:27:37.947 11:21:58 -- common/autotest_common.sh@955 -- # kill 1765524 00:27:37.947 11:21:58 -- common/autotest_common.sh@960 -- # wait 1765524 00:27:38.206 11:21:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:38.206 11:21:58 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:27:38.206 00:27:38.206 real 0m35.805s 00:27:38.206 user 2m2.800s 00:27:38.206 sys 0m5.902s 00:27:38.206 11:21:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:38.206 11:21:58 -- common/autotest_common.sh@10 -- # set +x 00:27:38.206 ************************************ 00:27:38.206 END TEST nvmf_failover 00:27:38.206 ************************************ 00:27:38.206 11:21:58 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:27:38.206 11:21:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:38.206 11:21:58 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:27:38.206 11:21:58 -- common/autotest_common.sh@10 -- # set +x 00:27:38.206 ************************************ 00:27:38.206 START TEST nvmf_discovery 00:27:38.206 ************************************ 00:27:38.206 11:21:58 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:27:38.206 * Looking for test storage... 00:27:38.206 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:38.206 11:21:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:38.206 11:21:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:38.206 11:21:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:38.206 11:21:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:38.206 11:21:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:38.206 11:21:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:38.206 11:21:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:38.206 11:21:58 -- scripts/common.sh@335 -- # IFS=.-: 00:27:38.206 11:21:58 -- scripts/common.sh@335 -- # read -ra ver1 00:27:38.206 11:21:58 -- scripts/common.sh@336 -- # IFS=.-: 00:27:38.206 11:21:58 -- scripts/common.sh@336 -- # read -ra ver2 00:27:38.206 11:21:58 -- scripts/common.sh@337 -- # local 'op=<' 00:27:38.206 11:21:58 -- scripts/common.sh@339 -- # ver1_l=2 00:27:38.206 11:21:58 -- scripts/common.sh@340 -- # ver2_l=1 00:27:38.206 11:21:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:38.206 11:21:58 -- scripts/common.sh@343 -- # case "$op" in 00:27:38.206 11:21:58 -- scripts/common.sh@344 -- # : 1 00:27:38.206 11:21:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:38.206 11:21:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:38.206 11:21:58 -- scripts/common.sh@364 -- # decimal 1 00:27:38.206 11:21:58 -- scripts/common.sh@352 -- # local d=1 00:27:38.206 11:21:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:38.206 11:21:58 -- scripts/common.sh@354 -- # echo 1 00:27:38.206 11:21:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:38.206 11:21:58 -- scripts/common.sh@365 -- # decimal 2 00:27:38.206 11:21:58 -- scripts/common.sh@352 -- # local d=2 00:27:38.206 11:21:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:38.206 11:21:58 -- scripts/common.sh@354 -- # echo 2 00:27:38.206 11:21:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:38.206 11:21:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:38.206 11:21:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:38.206 11:21:58 -- scripts/common.sh@367 -- # return 0 00:27:38.206 11:21:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:38.206 11:21:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:38.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.206 --rc genhtml_branch_coverage=1 00:27:38.206 --rc genhtml_function_coverage=1 00:27:38.206 --rc genhtml_legend=1 00:27:38.206 --rc geninfo_all_blocks=1 00:27:38.206 --rc geninfo_unexecuted_blocks=1 00:27:38.206 00:27:38.206 ' 00:27:38.206 11:21:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:38.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.206 --rc genhtml_branch_coverage=1 00:27:38.206 --rc genhtml_function_coverage=1 00:27:38.206 --rc genhtml_legend=1 00:27:38.206 --rc geninfo_all_blocks=1 00:27:38.206 --rc geninfo_unexecuted_blocks=1 00:27:38.206 00:27:38.206 ' 00:27:38.206 11:21:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:38.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.206 --rc genhtml_branch_coverage=1 00:27:38.206 --rc genhtml_function_coverage=1 00:27:38.206 --rc genhtml_legend=1 00:27:38.206 --rc geninfo_all_blocks=1 00:27:38.206 --rc geninfo_unexecuted_blocks=1 00:27:38.206 00:27:38.206 ' 00:27:38.206 11:21:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:38.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.206 --rc genhtml_branch_coverage=1 00:27:38.206 --rc genhtml_function_coverage=1 00:27:38.206 --rc genhtml_legend=1 00:27:38.206 --rc geninfo_all_blocks=1 00:27:38.206 --rc geninfo_unexecuted_blocks=1 00:27:38.206 00:27:38.206 ' 00:27:38.206 11:21:58 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:38.206 11:21:58 -- nvmf/common.sh@7 -- # uname -s 00:27:38.206 11:21:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:38.206 11:21:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:38.206 11:21:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:38.206 11:21:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:38.206 11:21:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:38.206 11:21:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:38.206 11:21:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:38.206 11:21:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:38.206 11:21:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:38.206 11:21:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:38.466 11:21:58 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:27:38.466 11:21:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:27:38.466 11:21:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:38.466 11:21:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:38.466 11:21:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:38.466 11:21:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:38.466 11:21:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:38.466 11:21:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:38.466 11:21:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:38.466 11:21:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.466 11:21:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.466 11:21:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.466 11:21:58 -- paths/export.sh@5 -- # export PATH 00:27:38.467 11:21:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.467 11:21:58 -- nvmf/common.sh@46 -- # : 0 00:27:38.467 11:21:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:38.467 11:21:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:38.467 11:21:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:38.467 11:21:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:38.467 11:21:58 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:38.467 11:21:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:38.467 11:21:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:38.467 11:21:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:38.467 11:21:58 -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:27:38.467 11:21:58 -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:27:38.467 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:27:38.467 11:21:58 -- host/discovery.sh@13 -- # exit 0 00:27:38.467 00:27:38.467 real 0m0.187s 00:27:38.467 user 0m0.109s 00:27:38.467 sys 0m0.090s 00:27:38.467 11:21:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:38.467 11:21:58 -- common/autotest_common.sh@10 -- # set +x 00:27:38.467 ************************************ 00:27:38.467 END TEST nvmf_discovery 00:27:38.467 ************************************ 00:27:38.467 11:21:58 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:27:38.467 11:21:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:38.467 11:21:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:38.467 11:21:58 -- common/autotest_common.sh@10 -- # set +x 00:27:38.467 ************************************ 00:27:38.467 START TEST nvmf_discovery_remove_ifc 00:27:38.467 ************************************ 00:27:38.467 11:21:58 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:27:38.467 * Looking for test storage... 00:27:38.467 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:38.467 11:21:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:38.467 11:21:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:38.467 11:21:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:38.467 11:21:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:38.467 11:21:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:38.467 11:21:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:38.467 11:21:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:38.467 11:21:58 -- scripts/common.sh@335 -- # IFS=.-: 00:27:38.467 11:21:58 -- scripts/common.sh@335 -- # read -ra ver1 00:27:38.467 11:21:58 -- scripts/common.sh@336 -- # IFS=.-: 00:27:38.467 11:21:58 -- scripts/common.sh@336 -- # read -ra ver2 00:27:38.467 11:21:58 -- scripts/common.sh@337 -- # local 'op=<' 00:27:38.467 11:21:58 -- scripts/common.sh@339 -- # ver1_l=2 00:27:38.467 11:21:58 -- scripts/common.sh@340 -- # ver2_l=1 00:27:38.467 11:21:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:38.467 11:21:58 -- scripts/common.sh@343 -- # case "$op" in 00:27:38.467 11:21:58 -- scripts/common.sh@344 -- # : 1 00:27:38.467 11:21:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:38.467 11:21:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:38.467 11:21:58 -- scripts/common.sh@364 -- # decimal 1 00:27:38.467 11:21:58 -- scripts/common.sh@352 -- # local d=1 00:27:38.467 11:21:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:38.467 11:21:58 -- scripts/common.sh@354 -- # echo 1 00:27:38.467 11:21:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:38.467 11:21:58 -- scripts/common.sh@365 -- # decimal 2 00:27:38.467 11:21:58 -- scripts/common.sh@352 -- # local d=2 00:27:38.467 11:21:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:38.467 11:21:58 -- scripts/common.sh@354 -- # echo 2 00:27:38.467 11:21:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:38.467 11:21:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:38.467 11:21:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:38.467 11:21:58 -- scripts/common.sh@367 -- # return 0 00:27:38.467 11:21:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:38.467 11:21:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:38.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.467 --rc genhtml_branch_coverage=1 00:27:38.467 --rc genhtml_function_coverage=1 00:27:38.467 --rc genhtml_legend=1 00:27:38.467 --rc geninfo_all_blocks=1 00:27:38.467 --rc geninfo_unexecuted_blocks=1 00:27:38.467 00:27:38.467 ' 00:27:38.467 11:21:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:38.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.467 --rc genhtml_branch_coverage=1 00:27:38.467 --rc genhtml_function_coverage=1 00:27:38.467 --rc genhtml_legend=1 00:27:38.467 --rc geninfo_all_blocks=1 00:27:38.467 --rc geninfo_unexecuted_blocks=1 00:27:38.467 00:27:38.467 ' 00:27:38.467 11:21:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:38.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.467 --rc genhtml_branch_coverage=1 00:27:38.467 --rc genhtml_function_coverage=1 00:27:38.467 --rc genhtml_legend=1 00:27:38.467 --rc geninfo_all_blocks=1 00:27:38.467 --rc geninfo_unexecuted_blocks=1 00:27:38.467 00:27:38.467 ' 00:27:38.467 11:21:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:38.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.467 --rc genhtml_branch_coverage=1 00:27:38.467 --rc genhtml_function_coverage=1 00:27:38.467 --rc genhtml_legend=1 00:27:38.467 --rc geninfo_all_blocks=1 00:27:38.467 --rc geninfo_unexecuted_blocks=1 00:27:38.467 00:27:38.467 ' 00:27:38.467 11:21:58 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:38.467 11:21:58 -- nvmf/common.sh@7 -- # uname -s 00:27:38.467 11:21:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:38.467 11:21:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:38.467 11:21:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:38.467 11:21:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:38.467 11:21:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:38.467 11:21:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:38.467 11:21:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:38.467 11:21:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:38.467 11:21:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:38.467 11:21:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:38.467 11:21:59 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:27:38.467 11:21:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:27:38.467 11:21:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:38.467 11:21:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:38.467 11:21:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:38.467 11:21:59 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:38.467 11:21:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:38.467 11:21:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:38.467 11:21:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:38.467 11:21:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.467 11:21:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.467 11:21:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.467 11:21:59 -- paths/export.sh@5 -- # export PATH 00:27:38.467 11:21:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.467 11:21:59 -- nvmf/common.sh@46 -- # : 0 00:27:38.467 11:21:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:38.467 11:21:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:38.467 11:21:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:38.467 11:21:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:38.467 11:21:59 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:38.467 11:21:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:38.467 11:21:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:38.467 11:21:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:38.467 11:21:59 -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:27:38.467 11:21:59 -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:27:38.467 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:27:38.467 11:21:59 -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:27:38.467 00:27:38.468 real 0m0.194s 00:27:38.468 user 0m0.113s 00:27:38.468 sys 0m0.091s 00:27:38.468 11:21:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:38.468 11:21:59 -- common/autotest_common.sh@10 -- # set +x 00:27:38.468 ************************************ 00:27:38.468 END TEST nvmf_discovery_remove_ifc 00:27:38.468 ************************************ 00:27:38.727 11:21:59 -- nvmf/nvmf.sh@106 -- # [[ rdma == \t\c\p ]] 00:27:38.727 11:21:59 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:27:38.727 11:21:59 -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:27:38.727 11:21:59 -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:27:38.727 11:21:59 -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:27:38.727 11:21:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:38.727 11:21:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:38.727 11:21:59 -- common/autotest_common.sh@10 -- # set +x 00:27:38.727 ************************************ 00:27:38.727 START TEST nvmf_bdevperf 00:27:38.727 ************************************ 00:27:38.727 11:21:59 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:27:38.727 * Looking for test storage... 00:27:38.727 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:38.727 11:21:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:38.727 11:21:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:38.727 11:21:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:38.727 11:21:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:38.727 11:21:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:38.727 11:21:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:38.727 11:21:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:38.727 11:21:59 -- scripts/common.sh@335 -- # IFS=.-: 00:27:38.727 11:21:59 -- scripts/common.sh@335 -- # read -ra ver1 00:27:38.727 11:21:59 -- scripts/common.sh@336 -- # IFS=.-: 00:27:38.727 11:21:59 -- scripts/common.sh@336 -- # read -ra ver2 00:27:38.727 11:21:59 -- scripts/common.sh@337 -- # local 'op=<' 00:27:38.727 11:21:59 -- scripts/common.sh@339 -- # ver1_l=2 00:27:38.727 11:21:59 -- scripts/common.sh@340 -- # ver2_l=1 00:27:38.727 11:21:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:38.727 11:21:59 -- scripts/common.sh@343 -- # case "$op" in 00:27:38.727 11:21:59 -- scripts/common.sh@344 -- # : 1 00:27:38.727 11:21:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:38.727 11:21:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:38.727 11:21:59 -- scripts/common.sh@364 -- # decimal 1 00:27:38.727 11:21:59 -- scripts/common.sh@352 -- # local d=1 00:27:38.728 11:21:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:38.728 11:21:59 -- scripts/common.sh@354 -- # echo 1 00:27:38.728 11:21:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:38.728 11:21:59 -- scripts/common.sh@365 -- # decimal 2 00:27:38.728 11:21:59 -- scripts/common.sh@352 -- # local d=2 00:27:38.728 11:21:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:38.728 11:21:59 -- scripts/common.sh@354 -- # echo 2 00:27:38.728 11:21:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:38.728 11:21:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:38.728 11:21:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:38.728 11:21:59 -- scripts/common.sh@367 -- # return 0 00:27:38.728 11:21:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:38.728 11:21:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:38.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.728 --rc genhtml_branch_coverage=1 00:27:38.728 --rc genhtml_function_coverage=1 00:27:38.728 --rc genhtml_legend=1 00:27:38.728 --rc geninfo_all_blocks=1 00:27:38.728 --rc geninfo_unexecuted_blocks=1 00:27:38.728 00:27:38.728 ' 00:27:38.728 11:21:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:38.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.728 --rc genhtml_branch_coverage=1 00:27:38.728 --rc genhtml_function_coverage=1 00:27:38.728 --rc genhtml_legend=1 00:27:38.728 --rc geninfo_all_blocks=1 00:27:38.728 --rc geninfo_unexecuted_blocks=1 00:27:38.728 00:27:38.728 ' 00:27:38.728 11:21:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:38.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.728 --rc genhtml_branch_coverage=1 00:27:38.728 --rc genhtml_function_coverage=1 00:27:38.728 --rc genhtml_legend=1 00:27:38.728 --rc geninfo_all_blocks=1 00:27:38.728 --rc geninfo_unexecuted_blocks=1 00:27:38.728 00:27:38.728 ' 00:27:38.728 11:21:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:38.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.728 --rc genhtml_branch_coverage=1 00:27:38.728 --rc genhtml_function_coverage=1 00:27:38.728 --rc genhtml_legend=1 00:27:38.728 --rc geninfo_all_blocks=1 00:27:38.728 --rc geninfo_unexecuted_blocks=1 00:27:38.728 00:27:38.728 ' 00:27:38.728 11:21:59 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:38.728 11:21:59 -- nvmf/common.sh@7 -- # uname -s 00:27:38.728 11:21:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:38.728 11:21:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:38.728 11:21:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:38.728 11:21:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:38.728 11:21:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:38.728 11:21:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:38.728 11:21:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:38.728 11:21:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:38.728 11:21:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:38.728 11:21:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:38.728 11:21:59 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:27:38.728 11:21:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:27:38.728 11:21:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:38.728 11:21:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:38.728 11:21:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:38.728 11:21:59 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:38.728 11:21:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:38.728 11:21:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:38.728 11:21:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:38.728 11:21:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.728 11:21:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.728 11:21:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.728 11:21:59 -- paths/export.sh@5 -- # export PATH 00:27:38.728 11:21:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.728 11:21:59 -- nvmf/common.sh@46 -- # : 0 00:27:38.728 11:21:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:38.728 11:21:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:38.728 11:21:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:38.728 11:21:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:38.728 11:21:59 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:38.728 11:21:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:38.728 11:21:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:38.728 11:21:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:38.728 11:21:59 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:38.728 11:21:59 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:38.728 11:21:59 -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:38.728 11:21:59 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:27:38.728 11:21:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:38.728 11:21:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:38.728 11:21:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:38.728 11:21:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:38.728 11:21:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.728 11:21:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:38.728 11:21:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.728 11:21:59 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:38.728 11:21:59 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:38.728 11:21:59 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:38.728 11:21:59 -- common/autotest_common.sh@10 -- # set +x 00:27:44.003 11:22:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:44.003 11:22:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:44.003 11:22:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:44.003 11:22:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:44.003 11:22:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:44.003 11:22:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:44.003 11:22:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:44.003 11:22:04 -- nvmf/common.sh@294 -- # net_devs=() 00:27:44.003 11:22:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:44.003 11:22:04 -- nvmf/common.sh@295 -- # e810=() 00:27:44.003 11:22:04 -- nvmf/common.sh@295 -- # local -ga e810 00:27:44.003 11:22:04 -- nvmf/common.sh@296 -- # x722=() 00:27:44.003 11:22:04 -- nvmf/common.sh@296 -- # local -ga x722 00:27:44.003 11:22:04 -- nvmf/common.sh@297 -- # mlx=() 00:27:44.003 11:22:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:44.003 11:22:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:44.003 11:22:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:44.003 11:22:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:44.003 11:22:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:44.003 11:22:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:44.003 11:22:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:44.003 11:22:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:44.003 11:22:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:44.003 11:22:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:44.003 11:22:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:44.003 11:22:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:44.003 11:22:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:44.003 11:22:04 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:27:44.003 11:22:04 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:27:44.003 
11:22:04 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:27:44.003 11:22:04 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:27:44.003 11:22:04 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:27:44.003 11:22:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:44.003 11:22:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:44.003 11:22:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:27:44.003 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:27:44.003 11:22:04 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:27:44.003 11:22:04 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:27:44.003 11:22:04 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:44.003 11:22:04 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:44.003 11:22:04 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:27:44.003 11:22:04 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:27:44.003 11:22:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:44.003 11:22:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:27:44.003 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:27:44.003 11:22:04 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:27:44.003 11:22:04 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:27:44.003 11:22:04 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:44.003 11:22:04 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:44.003 11:22:04 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:27:44.003 11:22:04 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:27:44.003 11:22:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:44.003 11:22:04 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:27:44.003 11:22:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:44.003 11:22:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:44.003 11:22:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:44.003 11:22:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:44.003 11:22:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:27:44.003 Found net devices under 0000:18:00.0: mlx_0_0 00:27:44.003 11:22:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:44.003 11:22:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:44.003 11:22:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:44.003 11:22:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:44.003 11:22:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:44.004 11:22:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:27:44.004 Found net devices under 0000:18:00.1: mlx_0_1 00:27:44.004 11:22:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:44.004 11:22:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:44.004 11:22:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:44.004 11:22:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:44.004 11:22:04 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:27:44.004 11:22:04 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:27:44.004 11:22:04 -- nvmf/common.sh@408 -- # rdma_device_init 00:27:44.004 11:22:04 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:27:44.004 11:22:04 -- nvmf/common.sh@57 -- # uname 00:27:44.263 11:22:04 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:27:44.263 11:22:04 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:27:44.263 
11:22:04 -- nvmf/common.sh@62 -- # modprobe ib_core 00:27:44.263 11:22:04 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:27:44.263 11:22:04 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:27:44.263 11:22:04 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:27:44.263 11:22:04 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:27:44.263 11:22:04 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:27:44.263 11:22:04 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:27:44.263 11:22:04 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:44.263 11:22:04 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:27:44.263 11:22:04 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:44.263 11:22:04 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:27:44.263 11:22:04 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:27:44.263 11:22:04 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:44.263 11:22:04 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:27:44.263 11:22:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:44.263 11:22:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:44.263 11:22:04 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:44.263 11:22:04 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:27:44.263 11:22:04 -- nvmf/common.sh@104 -- # continue 2 00:27:44.263 11:22:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:44.263 11:22:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:44.263 11:22:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:44.263 11:22:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:44.263 11:22:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:44.263 11:22:04 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:27:44.263 11:22:04 -- nvmf/common.sh@104 -- # continue 2 00:27:44.263 11:22:04 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:27:44.263 11:22:04 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:27:44.263 11:22:04 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:27:44.263 11:22:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:44.263 11:22:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:27:44.263 11:22:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:44.263 11:22:04 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:27:44.263 11:22:04 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:27:44.263 11:22:04 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:27:44.263 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:44.263 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:27:44.263 altname enp24s0f0np0 00:27:44.263 altname ens785f0np0 00:27:44.263 inet 192.168.100.8/24 scope global mlx_0_0 00:27:44.263 valid_lft forever preferred_lft forever 00:27:44.263 11:22:04 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:27:44.264 11:22:04 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:27:44.264 11:22:04 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:27:44.264 11:22:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:27:44.264 11:22:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:44.264 11:22:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:44.264 11:22:04 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:27:44.264 11:22:04 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:27:44.264 11:22:04 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:27:44.264 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group 
default qlen 1000 00:27:44.264 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:27:44.264 altname enp24s0f1np1 00:27:44.264 altname ens785f1np1 00:27:44.264 inet 192.168.100.9/24 scope global mlx_0_1 00:27:44.264 valid_lft forever preferred_lft forever 00:27:44.264 11:22:04 -- nvmf/common.sh@410 -- # return 0 00:27:44.264 11:22:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:44.264 11:22:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:44.264 11:22:04 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:27:44.264 11:22:04 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:27:44.264 11:22:04 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:27:44.264 11:22:04 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:44.264 11:22:04 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:27:44.264 11:22:04 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:27:44.264 11:22:04 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:44.264 11:22:04 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:27:44.264 11:22:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:44.264 11:22:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:44.264 11:22:04 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:44.264 11:22:04 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:27:44.264 11:22:04 -- nvmf/common.sh@104 -- # continue 2 00:27:44.264 11:22:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:44.264 11:22:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:44.264 11:22:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:44.264 11:22:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:44.264 11:22:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:44.264 11:22:04 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:27:44.264 11:22:04 -- nvmf/common.sh@104 -- # continue 2 00:27:44.264 11:22:04 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:27:44.264 11:22:04 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:27:44.264 11:22:04 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:27:44.264 11:22:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:27:44.264 11:22:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:44.264 11:22:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:44.264 11:22:04 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:27:44.264 11:22:04 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:27:44.264 11:22:04 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:27:44.264 11:22:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:27:44.264 11:22:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:44.264 11:22:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:44.264 11:22:04 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:27:44.264 192.168.100.9' 00:27:44.264 11:22:04 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:27:44.264 192.168.100.9' 00:27:44.264 11:22:04 -- nvmf/common.sh@445 -- # head -n 1 00:27:44.264 11:22:04 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:44.264 11:22:04 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:27:44.264 192.168.100.9' 00:27:44.264 11:22:04 -- nvmf/common.sh@446 -- # tail -n +2 00:27:44.264 11:22:04 -- nvmf/common.sh@446 -- # head -n 1 00:27:44.264 11:22:04 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:44.264 11:22:04 -- nvmf/common.sh@447 -- # '[' -z 
192.168.100.8 ']' 00:27:44.264 11:22:04 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:44.264 11:22:04 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:27:44.264 11:22:04 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:27:44.264 11:22:04 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:27:44.264 11:22:04 -- host/bdevperf.sh@25 -- # tgt_init 00:27:44.264 11:22:04 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:44.264 11:22:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:44.264 11:22:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:44.264 11:22:04 -- common/autotest_common.sh@10 -- # set +x 00:27:44.264 11:22:04 -- nvmf/common.sh@469 -- # nvmfpid=1774274 00:27:44.264 11:22:04 -- nvmf/common.sh@470 -- # waitforlisten 1774274 00:27:44.264 11:22:04 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:44.264 11:22:04 -- common/autotest_common.sh@829 -- # '[' -z 1774274 ']' 00:27:44.264 11:22:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:44.264 11:22:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:44.264 11:22:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:44.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:44.264 11:22:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:44.264 11:22:04 -- common/autotest_common.sh@10 -- # set +x 00:27:44.264 [2024-12-13 11:22:04.816952] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:44.264 [2024-12-13 11:22:04.816992] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:44.523 EAL: No free 2048 kB hugepages reported on node 1 00:27:44.524 [2024-12-13 11:22:04.872385] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:44.524 [2024-12-13 11:22:04.941463] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:44.524 [2024-12-13 11:22:04.941570] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:44.524 [2024-12-13 11:22:04.941578] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:44.524 [2024-12-13 11:22:04.941584] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
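[Editor's note] The target start just logged (nvmfappstart -m 0xE, then "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...") is a launch-and-poll pattern. A minimal sketch under assumptions: repo-relative paths, and rpc_get_methods used only as a cheap liveness probe; the real waitforlisten helper in autotest_common.sh does more bookkeeping than this.

    # start the target with the same flags as logged above and remember its pid
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # poll the RPC socket until the target answers, then continue with setup
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done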
00:27:44.524 [2024-12-13 11:22:04.941622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:44.524 [2024-12-13 11:22:04.941690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:44.524 [2024-12-13 11:22:04.941692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.090 11:22:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:45.090 11:22:05 -- common/autotest_common.sh@862 -- # return 0 00:27:45.090 11:22:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:45.090 11:22:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:45.090 11:22:05 -- common/autotest_common.sh@10 -- # set +x 00:27:45.090 11:22:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:45.090 11:22:05 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:45.090 11:22:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.090 11:22:05 -- common/autotest_common.sh@10 -- # set +x 00:27:45.350 [2024-12-13 11:22:05.669190] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d5d140/0x1d61630) succeed. 00:27:45.350 [2024-12-13 11:22:05.677192] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d5e690/0x1da2cd0) succeed. 00:27:45.350 11:22:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.350 11:22:05 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:45.350 11:22:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.350 11:22:05 -- common/autotest_common.sh@10 -- # set +x 00:27:45.350 Malloc0 00:27:45.350 11:22:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.350 11:22:05 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:45.350 11:22:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.350 11:22:05 -- common/autotest_common.sh@10 -- # set +x 00:27:45.350 11:22:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.350 11:22:05 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:45.350 11:22:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.350 11:22:05 -- common/autotest_common.sh@10 -- # set +x 00:27:45.350 11:22:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.350 11:22:05 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:45.350 11:22:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.350 11:22:05 -- common/autotest_common.sh@10 -- # set +x 00:27:45.350 [2024-12-13 11:22:05.810787] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:45.350 11:22:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.350 11:22:05 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:27:45.350 11:22:05 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:27:45.350 11:22:05 -- nvmf/common.sh@520 -- # config=() 00:27:45.350 11:22:05 -- nvmf/common.sh@520 -- # local subsystem config 00:27:45.350 11:22:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:45.350 11:22:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:45.350 { 00:27:45.350 "params": { 00:27:45.350 "name": "Nvme$subsystem", 00:27:45.350 "trtype": 
"$TEST_TRANSPORT", 00:27:45.350 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.350 "adrfam": "ipv4", 00:27:45.350 "trsvcid": "$NVMF_PORT", 00:27:45.350 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.350 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.350 "hdgst": ${hdgst:-false}, 00:27:45.350 "ddgst": ${ddgst:-false} 00:27:45.350 }, 00:27:45.350 "method": "bdev_nvme_attach_controller" 00:27:45.350 } 00:27:45.350 EOF 00:27:45.350 )") 00:27:45.350 11:22:05 -- nvmf/common.sh@542 -- # cat 00:27:45.350 11:22:05 -- nvmf/common.sh@544 -- # jq . 00:27:45.350 11:22:05 -- nvmf/common.sh@545 -- # IFS=, 00:27:45.350 11:22:05 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:45.350 "params": { 00:27:45.350 "name": "Nvme1", 00:27:45.350 "trtype": "rdma", 00:27:45.350 "traddr": "192.168.100.8", 00:27:45.350 "adrfam": "ipv4", 00:27:45.350 "trsvcid": "4420", 00:27:45.350 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:45.350 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:45.350 "hdgst": false, 00:27:45.350 "ddgst": false 00:27:45.350 }, 00:27:45.350 "method": "bdev_nvme_attach_controller" 00:27:45.350 }' 00:27:45.350 [2024-12-13 11:22:05.857765] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:45.350 [2024-12-13 11:22:05.857813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1774450 ] 00:27:45.350 EAL: No free 2048 kB hugepages reported on node 1 00:27:45.350 [2024-12-13 11:22:05.909316] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.637 [2024-12-13 11:22:05.979483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:45.637 Running I/O for 1 seconds... 
00:27:46.597 00:27:46.597 Latency(us) 00:27:46.597 [2024-12-13T10:22:07.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:46.597 [2024-12-13T10:22:07.166Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:46.597 Verification LBA range: start 0x0 length 0x4000 00:27:46.597 Nvme1n1 : 1.00 27297.59 106.63 0.00 0.00 4668.01 1177.22 11893.57 00:27:46.597 [2024-12-13T10:22:07.166Z] =================================================================================================================== 00:27:46.597 [2024-12-13T10:22:07.166Z] Total : 27297.59 106.63 0.00 0.00 4668.01 1177.22 11893.57 00:27:46.856 11:22:07 -- host/bdevperf.sh@30 -- # bdevperfpid=1774759 00:27:46.856 11:22:07 -- host/bdevperf.sh@32 -- # sleep 3 00:27:46.856 11:22:07 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:27:46.856 11:22:07 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:27:46.856 11:22:07 -- nvmf/common.sh@520 -- # config=() 00:27:46.856 11:22:07 -- nvmf/common.sh@520 -- # local subsystem config 00:27:46.856 11:22:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:46.856 11:22:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:46.856 { 00:27:46.856 "params": { 00:27:46.856 "name": "Nvme$subsystem", 00:27:46.856 "trtype": "$TEST_TRANSPORT", 00:27:46.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.856 "adrfam": "ipv4", 00:27:46.856 "trsvcid": "$NVMF_PORT", 00:27:46.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.856 "hdgst": ${hdgst:-false}, 00:27:46.856 "ddgst": ${ddgst:-false} 00:27:46.856 }, 00:27:46.856 "method": "bdev_nvme_attach_controller" 00:27:46.856 } 00:27:46.856 EOF 00:27:46.856 )") 00:27:46.856 11:22:07 -- nvmf/common.sh@542 -- # cat 00:27:46.856 11:22:07 -- nvmf/common.sh@544 -- # jq . 00:27:46.856 11:22:07 -- nvmf/common.sh@545 -- # IFS=, 00:27:46.856 11:22:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:46.856 "params": { 00:27:46.856 "name": "Nvme1", 00:27:46.856 "trtype": "rdma", 00:27:46.856 "traddr": "192.168.100.8", 00:27:46.856 "adrfam": "ipv4", 00:27:46.856 "trsvcid": "4420", 00:27:46.856 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:46.856 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:46.856 "hdgst": false, 00:27:46.856 "ddgst": false 00:27:46.856 }, 00:27:46.856 "method": "bdev_nvme_attach_controller" 00:27:46.856 }' 00:27:46.856 [2024-12-13 11:22:07.420982] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:46.856 [2024-12-13 11:22:07.421029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1774759 ] 00:27:47.115 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.115 [2024-12-13 11:22:07.471717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.115 [2024-12-13 11:22:07.537308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.374 Running I/O for 15 seconds... 
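[Editor's note] The second run (-t 15 -f) is the fault-injection case: bdevperf.sh backgrounds bdevperf, then kills the target out from under it, which is what produces the wall of "ABORTED - SQ DELETION" completions below. A minimal sketch of that step, with the pids as they appear in this log (flags copied from the command line logged above):

    build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!             # 1774759 in this run
    sleep 3
    kill -9 "$nvmfpid"         # 1774274: drop the target mid-run; in-flight I/O gets aborted
    sleep 3                    # let the host side hit the error path before the target is restarted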
00:27:49.907 11:22:10 -- host/bdevperf.sh@33 -- # kill -9 1774274 00:27:49.907 11:22:10 -- host/bdevperf.sh@35 -- # sleep 3 00:27:51.294 [2024-12-13 11:22:11.416469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:60488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389dd00 len:0x1000 key:0x182900 00:27:51.294 [2024-12-13 11:22:11.416502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.294 [2024-12-13 11:22:11.416517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.294 [2024-12-13 11:22:11.416524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.294 [2024-12-13 11:22:11.416531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:60504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.294 [2024-12-13 11:22:11.416537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.294 [2024-12-13 11:22:11.416545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:60512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389ab80 len:0x1000 key:0x182900 00:27:51.294 [2024-12-13 11:22:11.416551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.294 [2024-12-13 11:22:11.416558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:60520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013899b00 len:0x1000 key:0x182900 00:27:51.294 [2024-12-13 11:22:11.416569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.294 [2024-12-13 11:22:11.416576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.294 [2024-12-13 11:22:11.416581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.294 [2024-12-13 11:22:11.416588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:59808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x180800 00:27:51.294 [2024-12-13 11:22:11.416594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.294 [2024-12-13 11:22:11.416601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.294 [2024-12-13 11:22:11.416608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.294 [2024-12-13 11:22:11.416615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.294 [2024-12-13 11:22:11.416620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.294 [2024-12-13 11:22:11.416627] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:96 nsid:1 lba:59824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x180800 00:27:51.294 [2024-12-13 11:22:11.416633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.294 [2024-12-13 11:22:11.416640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:59832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x180800 00:27:51.295 [2024-12-13 11:22:11.416645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.295 [2024-12-13 11:22:11.416654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:59840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x180800 00:27:51.295 [2024-12-13 11:22:11.416660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.295 [2024-12-13 11:22:11.416667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:60552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013891700 len:0x1000 key:0x182900 00:27:51.295 [2024-12-13 11:22:11.416673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.295 [2024-12-13 11:22:11.416680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:60560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.295 [2024-12-13 11:22:11.416685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.295 [2024-12-13 11:22:11.416692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:60568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.295 [2024-12-13 11:22:11.416697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.295 [2024-12-13 11:22:11.416704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:60576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388e580 len:0x1000 key:0x182900 00:27:51.295 [2024-12-13 11:22:11.416709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.295 [2024-12-13 11:22:11.416717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:60584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.295 [2024-12-13 11:22:11.416726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.295 [2024-12-13 11:22:11.416734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:59872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x180800 00:27:51.295 [2024-12-13 11:22:11.416740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.295 [2024-12-13 11:22:11.416748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.295 [2024-12-13 11:22:11.416754] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.295 [2024-12-13 11:22:11.416761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:59880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x180800 00:27:51.295 [2024-12-13 11:22:11.416766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.295 [2024-12-13 11:22:11.416775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:60600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013889300 len:0x1000 key:0x182900 00:27:51.295 [2024-12-13 11:22:11.416782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.295 [2024-12-13 11:22:11.416789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:60608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.295 [2024-12-13 11:22:11.416796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.295 [2024-12-13 11:22:11.416803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:60616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013887200 len:0x1000 key:0x182900 00:27:51.295 [2024-12-13 11:22:11.416809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.295 [2024-12-13 11:22:11.416818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:59896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x180800 00:27:51.295 [2024-12-13 11:22:11.416823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.295 [2024-12-13 11:22:11.416832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:60624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.295 [2024-12-13 11:22:11.416838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.295 [2024-12-13 11:22:11.416845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:60632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.295 [2024-12-13 11:22:11.416850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.295 [2024-12-13 11:22:11.416857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:60640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013883000 len:0x1000 key:0x182900 00:27:51.295 [2024-12-13 11:22:11.416862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.295 [2024-12-13 11:22:11.416870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:60648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013881f80 len:0x1000 key:0x182900 00:27:51.295 [2024-12-13 11:22:11.416876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.295 [2024-12-13 
11:22:11.416883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:59920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x180800 00:27:51.295 [2024-12-13 11:22:11.416890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.295 [2024-12-13 11:22:11.416897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:60656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387fe80 len:0x1000 key:0x182900 00:27:51.295 [2024-12-13 11:22:11.416903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.295 [2024-12-13 11:22:11.416910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:60664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387ee00 len:0x1000 key:0x182900 00:27:51.295 [2024-12-13 11:22:11.416915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.295 [2024-12-13 11:22:11.416923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:59944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x180800 00:27:51.295 [2024-12-13 11:22:11.416928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.295 [2024-12-13 11:22:11.416935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:60672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387cd00 len:0x1000 key:0x182900 00:27:51.295 [2024-12-13 11:22:11.416942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.295 [2024-12-13 11:22:11.416950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:60680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387bc80 len:0x1000 key:0x182900 00:27:51.295 [2024-12-13 11:22:11.416956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.295 [2024-12-13 11:22:11.416963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:59960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x180800 00:27:51.295 [2024-12-13 11:22:11.416968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.295 [2024-12-13 11:22:11.416975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:60688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.295 [2024-12-13 11:22:11.416981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.295 [2024-12-13 11:22:11.416987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:60696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.295 [2024-12-13 11:22:11.416993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.295 [2024-12-13 11:22:11.417001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:59984 len:8 SGL KEYED DATA 
BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x180800 00:27:51.295 [2024-12-13 11:22:11.417007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.295 [2024-12-13 11:22:11.417015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:60704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013876a00 len:0x1000 key:0x182900 00:27:51.295 [2024-12-13 11:22:11.417021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.295 [2024-12-13 11:22:11.417029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:60712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f9980 len:0x1000 key:0x182900 00:27:51.295 [2024-12-13 11:22:11.417036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.295 [2024-12-13 11:22:11.417044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:60008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x180800 00:27:51.295 [2024-12-13 11:22:11.417050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.295 [2024-12-13 11:22:11.417057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.295 [2024-12-13 11:22:11.417063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.295 [2024-12-13 11:22:11.417070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:60016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x180800 00:27:51.295 [2024-12-13 11:22:11.417076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.295 [2024-12-13 11:22:11.417083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:60728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.295 [2024-12-13 11:22:11.417089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.295 [2024-12-13 11:22:11.417097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:60024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x180800 00:27:51.295 [2024-12-13 11:22:11.417103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.295 [2024-12-13 11:22:11.417110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x180800 00:27:51.295 [2024-12-13 11:22:11.417115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.296 [2024-12-13 11:22:11.417122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:60736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.296 [2024-12-13 11:22:11.417127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.296 [2024-12-13 11:22:11.417134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:60040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x180800 00:27:51.296 [2024-12-13 11:22:11.417139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.296 [2024-12-13 11:22:11.417147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:60744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f0500 len:0x1000 key:0x182900 00:27:51.296 [2024-12-13 11:22:11.417152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.296 [2024-12-13 11:22:11.417159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:60752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.296 [2024-12-13 11:22:11.417165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.296 [2024-12-13 11:22:11.417171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:60056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x180800 00:27:51.296 [2024-12-13 11:22:11.417177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.296 [2024-12-13 11:22:11.417184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.296 [2024-12-13 11:22:11.417190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.296 [2024-12-13 11:22:11.417197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:60768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ec300 len:0x1000 key:0x182900 00:27:51.296 [2024-12-13 11:22:11.417202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.296 [2024-12-13 11:22:11.417209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.296 [2024-12-13 11:22:11.417214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.296 [2024-12-13 11:22:11.417221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:60784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ea200 len:0x1000 key:0x182900 00:27:51.296 [2024-12-13 11:22:11.417227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.296 [2024-12-13 11:22:11.417233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:60792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e9180 len:0x1000 key:0x182900 00:27:51.296 [2024-12-13 11:22:11.417239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.296 [2024-12-13 11:22:11.417246] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.296 [2024-12-13 11:22:11.417251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.296 [2024-12-13 11:22:11.417259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:60808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e7080 len:0x1000 key:0x182900 00:27:51.296 [2024-12-13 11:22:11.417265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.296 [2024-12-13 11:22:11.417276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:60096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x180800 00:27:51.296 [2024-12-13 11:22:11.417282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.296 [2024-12-13 11:22:11.417289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:60104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x180800 00:27:51.296 [2024-12-13 11:22:11.417294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.296 [2024-12-13 11:22:11.417301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:60112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x180800 00:27:51.296 [2024-12-13 11:22:11.417306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.296 [2024-12-13 11:22:11.417314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.296 [2024-12-13 11:22:11.417319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.296 [2024-12-13 11:22:11.417326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:60120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x180800 00:27:51.296 [2024-12-13 11:22:11.417331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.296 [2024-12-13 11:22:11.417339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:60128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x180800 00:27:51.296 [2024-12-13 11:22:11.417345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.296 [2024-12-13 11:22:11.417352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:60136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x180800 00:27:51.296 [2024-12-13 11:22:11.417357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.296 [2024-12-13 11:22:11.417366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:60824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.296 [2024-12-13 
11:22:11.417371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.296 [2024-12-13 11:22:11.417378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:60152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x180800 00:27:51.296 [2024-12-13 11:22:11.417384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.296 [2024-12-13 11:22:11.417391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:60160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x180800 00:27:51.296 [2024-12-13 11:22:11.417397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.296 [2024-12-13 11:22:11.417403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:60168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x180800 00:27:51.296 [2024-12-13 11:22:11.417409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.296 [2024-12-13 11:22:11.417416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:60176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x180800 00:27:51.296 [2024-12-13 11:22:11.417421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.296 [2024-12-13 11:22:11.417429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:60832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d9a00 len:0x1000 key:0x182900 00:27:51.296 [2024-12-13 11:22:11.417434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.296 [2024-12-13 11:22:11.417442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:60840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d8980 len:0x1000 key:0x182900 00:27:51.296 [2024-12-13 11:22:11.417447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.296 [2024-12-13 11:22:11.417454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:60192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x180800 00:27:51.296 [2024-12-13 11:22:11.417459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.296 [2024-12-13 11:22:11.417466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:60848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.296 [2024-12-13 11:22:11.417471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.296 [2024-12-13 11:22:11.417479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:60208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x180800 00:27:51.296 [2024-12-13 11:22:11.417487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.296 [2024-12-13 11:22:11.417494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:60856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.296 [2024-12-13 11:22:11.417500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.296 [2024-12-13 11:22:11.417507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:60216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x180800 00:27:51.296 [2024-12-13 11:22:11.417512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.296 [2024-12-13 11:22:11.417519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:60864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.296 [2024-12-13 11:22:11.417524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.296 [2024-12-13 11:22:11.417531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:60872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d1600 len:0x1000 key:0x182900 00:27:51.296 [2024-12-13 11:22:11.417537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.296 [2024-12-13 11:22:11.417544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:60240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x180800 00:27:51.296 [2024-12-13 11:22:11.417549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.296 [2024-12-13 11:22:11.417556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:60248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x180800 00:27:51.296 [2024-12-13 11:22:11.417561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.296 [2024-12-13 11:22:11.417569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.296 [2024-12-13 11:22:11.417574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.296 [2024-12-13 11:22:11.417580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:60256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x180800 00:27:51.296 [2024-12-13 11:22:11.417586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 11:22:11.417593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:60888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cc380 len:0x1000 key:0x182900 00:27:51.297 [2024-12-13 11:22:11.417599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 11:22:11.417605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:117 nsid:1 lba:60896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cb300 len:0x1000 key:0x182900 00:27:51.297 [2024-12-13 11:22:11.417611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 11:22:11.417618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:60904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca280 len:0x1000 key:0x182900 00:27:51.297 [2024-12-13 11:22:11.417623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 11:22:11.417634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:60912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c9200 len:0x1000 key:0x182900 00:27:51.297 [2024-12-13 11:22:11.417640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 11:22:11.417647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c8180 len:0x1000 key:0x182900 00:27:51.297 [2024-12-13 11:22:11.417652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 11:22:11.417660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:60272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x180800 00:27:51.297 [2024-12-13 11:22:11.417665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 11:22:11.417673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:60280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x180800 00:27:51.297 [2024-12-13 11:22:11.417678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 11:22:11.417685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:60288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x180800 00:27:51.297 [2024-12-13 11:22:11.417690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 11:22:11.417697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:60928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c3f80 len:0x1000 key:0x182900 00:27:51.297 [2024-12-13 11:22:11.417703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 11:22:11.417710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:60304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x180800 00:27:51.297 [2024-12-13 11:22:11.417717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 11:22:11.417725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:60312 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x20000754c000 len:0x1000 key:0x180800 00:27:51.297 [2024-12-13 11:22:11.417730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 11:22:11.417737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:60936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c0e00 len:0x1000 key:0x182900 00:27:51.297 [2024-12-13 11:22:11.417742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 11:22:11.417749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:60944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bfd80 len:0x1000 key:0x182900 00:27:51.297 [2024-12-13 11:22:11.417755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 11:22:11.417762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:60952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bed00 len:0x1000 key:0x182900 00:27:51.297 [2024-12-13 11:22:11.417768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 11:22:11.417775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:60328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x180800 00:27:51.297 [2024-12-13 11:22:11.417783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 11:22:11.417790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:60960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.297 [2024-12-13 11:22:11.417795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 11:22:11.417802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:60968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bbb80 len:0x1000 key:0x182900 00:27:51.297 [2024-12-13 11:22:11.417808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 11:22:11.417814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:60976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bab00 len:0x1000 key:0x182900 00:27:51.297 [2024-12-13 11:22:11.417820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 11:22:11.417833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:60984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.297 [2024-12-13 11:22:11.417838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 11:22:11.417845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:60352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x180800 00:27:51.297 [2024-12-13 11:22:11.417851] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 11:22:11.417858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:60992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b7980 len:0x1000 key:0x182900 00:27:51.297 [2024-12-13 11:22:11.417863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 11:22:11.417870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:61000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b6900 len:0x1000 key:0x182900 00:27:51.297 [2024-12-13 11:22:11.417876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 11:22:11.417883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.297 [2024-12-13 11:22:11.417888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 11:22:11.417895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:60376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x180800 00:27:51.297 [2024-12-13 11:22:11.417900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 11:22:11.417907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:61016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b3780 len:0x1000 key:0x182900 00:27:51.297 [2024-12-13 11:22:11.417913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 11:22:11.417919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:61024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b2700 len:0x1000 key:0x182900 00:27:51.297 [2024-12-13 11:22:11.417925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 11:22:11.417933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:61032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b1680 len:0x1000 key:0x182900 00:27:51.297 [2024-12-13 11:22:11.417939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 11:22:11.417946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:60408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x180800 00:27:51.297 [2024-12-13 11:22:11.417952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 11:22:11.417958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:61040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.297 [2024-12-13 11:22:11.417963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 
11:22:11.417970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.297 [2024-12-13 11:22:11.417976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 11:22:11.417983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:61056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ad480 len:0x1000 key:0x182900 00:27:51.297 [2024-12-13 11:22:11.417990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 11:22:11.417997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.297 [2024-12-13 11:22:11.418002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 11:22:11.418009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:61072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ab380 len:0x1000 key:0x182900 00:27:51.297 [2024-12-13 11:22:11.418014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 11:22:11.418021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:61080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa300 len:0x1000 key:0x182900 00:27:51.297 [2024-12-13 11:22:11.418026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 11:22:11.418033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:61088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.297 [2024-12-13 11:22:11.418038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 11:22:11.418046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:61096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a8200 len:0x1000 key:0x182900 00:27:51.297 [2024-12-13 11:22:11.418051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 11:22:11.418058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:61104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.297 [2024-12-13 11:22:11.418064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.297 [2024-12-13 11:22:11.426858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:61112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a6100 len:0x1000 key:0x182900 00:27:51.298 [2024-12-13 11:22:11.426917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.298 [2024-12-13 11:22:11.426960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:61120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.298 [2024-12-13 
11:22:11.426985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.298 [2024-12-13 11:22:11.427013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:61128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.298 [2024-12-13 11:22:11.427035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.298 [2024-12-13 11:22:11.427063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:60464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x180800 00:27:51.298 [2024-12-13 11:22:11.427085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.298 [2024-12-13 11:22:11.427114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:61136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.298 [2024-12-13 11:22:11.427136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.298 [2024-12-13 11:22:11.427163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:61144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.298 [2024-12-13 11:22:11.427184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.298 [2024-12-13 11:22:11.427212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:61152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389fe00 len:0x1000 key:0x182900 00:27:51.298 [2024-12-13 11:22:11.427235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:77b8a000 sqhd:5310 p:0 m:0 dnr:0 00:27:51.298 [2024-12-13 11:22:11.429400] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:51.298 [2024-12-13 11:22:11.429434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:51.298 [2024-12-13 11:22:11.429456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61160 len:8 PRP1 0x0 PRP2 0x0 00:27:51.298 [2024-12-13 11:22:11.429479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.298 [2024-12-13 11:22:11.429553] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4a40 was disconnected and freed. reset controller. 
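The wall of READ/WRITE notices above is the host draining its I/O queue pair: every command still queued on qpair 1 is completed manually as ABORTED - SQ DELETION while the controller is torn down for a reset, after which the qpair is disconnected and freed. When digging through a saved console log, a couple of one-liners are enough to summarize a dump like this (console.log is only a placeholder for wherever the output was captured):

  # total number of aborted completions in the dump
  grep -c 'ABORTED - SQ DELETION' console.log
  # break the aborted commands down by opcode (READ vs WRITE)
  grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' console.log | sort | uniq -c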
00:27:51.298 [2024-12-13 11:22:11.429625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:51.298 [2024-12-13 11:22:11.429653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.298 [2024-12-13 11:22:11.429677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:51.298 [2024-12-13 11:22:11.429700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.298 [2024-12-13 11:22:11.429723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:51.298 [2024-12-13 11:22:11.429745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.298 [2024-12-13 11:22:11.429768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:51.298 [2024-12-13 11:22:11.429791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.298 [2024-12-13 11:22:11.446290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:51.298 [2024-12-13 11:22:11.446306] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:51.298 [2024-12-13 11:22:11.446313] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:51.298 [2024-12-13 11:22:11.448022] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:51.298 [2024-12-13 11:22:11.450044] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:51.298 [2024-12-13 11:22:11.450061] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:51.298 [2024-12-13 11:22:11.450073] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:27:52.020 [2024-12-13 11:22:12.453975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:52.020 [2024-12-13 11:22:12.454025] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:52.020 [2024-12-13 11:22:12.454419] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:52.020 [2024-12-13 11:22:12.454449] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:52.020 [2024-12-13 11:22:12.454473] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:27:52.020 [2024-12-13 11:22:12.454900] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:52.020 [2024-12-13 11:22:12.456168] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
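Both reset attempts above end the same way: the RDMA connect is answered with RDMA_CM_EVENT_REJECTED instead of RDMA_CM_EVENT_ESTABLISHED (connect error -74), so the reconnect to rqpair 0x2000192ed0c0 fails and the controller stays in the failed state. That is expected at this point in the run, almost certainly because the target application has just been killed as part of the test (the Killed line below) and nothing is listening at the transport address yet. From the initiator side, a quick way to check whether a listener is back is an nvme-cli discovery against the address and service id this run uses, e.g.:

  nvme discover -t rdma -a 192.168.100.8 -s 4420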
00:27:52.020 [2024-12-13 11:22:12.466517] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:52.020 [2024-12-13 11:22:12.468694] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:52.020 [2024-12-13 11:22:12.468711] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:52.020 [2024-12-13 11:22:12.468716] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:27:52.958 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1774274 Killed "${NVMF_APP[@]}" "$@" 00:27:52.958 11:22:13 -- host/bdevperf.sh@36 -- # tgt_init 00:27:52.958 11:22:13 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:52.958 11:22:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:52.958 11:22:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:52.958 11:22:13 -- common/autotest_common.sh@10 -- # set +x 00:27:52.958 11:22:13 -- nvmf/common.sh@469 -- # nvmfpid=1775826 00:27:52.958 11:22:13 -- nvmf/common.sh@470 -- # waitforlisten 1775826 00:27:52.958 11:22:13 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:52.958 11:22:13 -- common/autotest_common.sh@829 -- # '[' -z 1775826 ']' 00:27:52.958 11:22:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:52.958 11:22:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:52.958 11:22:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:52.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:52.958 11:22:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:52.958 11:22:13 -- common/autotest_common.sh@10 -- # set +x 00:27:52.958 [2024-12-13 11:22:13.441364] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:52.958 [2024-12-13 11:22:13.441405] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:52.958 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.958 [2024-12-13 11:22:13.472528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:52.958 [2024-12-13 11:22:13.472549] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:52.958 [2024-12-13 11:22:13.472656] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:52.958 [2024-12-13 11:22:13.472665] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:52.958 [2024-12-13 11:22:13.472674] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:27:52.958 [2024-12-13 11:22:13.474176] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:52.958 [2024-12-13 11:22:13.474358] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
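Here bdevperf.sh restarts the target: tgt_init launches a fresh nvmf_tgt (pid 1775826) with core mask 0xE. 0xE is binary 1110, i.e. cores 1, 2 and 3, which is why the app reports three available cores and starts reactors on cores 1-3 just below. A throwaway shell line to decode such a mask:

  mask=0xE; for i in $(seq 0 31); do (( (mask >> i) & 1 )) && echo "core $i"; done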
00:27:52.958 [2024-12-13 11:22:13.486052] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:52.958 [2024-12-13 11:22:13.488032] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:52.958 [2024-12-13 11:22:13.488050] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:52.958 [2024-12-13 11:22:13.488056] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:27:52.958 [2024-12-13 11:22:13.492802] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:53.217 [2024-12-13 11:22:13.564625] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:53.217 [2024-12-13 11:22:13.564725] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:53.217 [2024-12-13 11:22:13.564733] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:53.217 [2024-12-13 11:22:13.564738] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:53.217 [2024-12-13 11:22:13.564778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:53.217 [2024-12-13 11:22:13.564860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:53.217 [2024-12-13 11:22:13.564862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:53.784 11:22:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:53.784 11:22:14 -- common/autotest_common.sh@862 -- # return 0 00:27:53.784 11:22:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:53.784 11:22:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:53.784 11:22:14 -- common/autotest_common.sh@10 -- # set +x 00:27:53.784 11:22:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:53.784 11:22:14 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:53.784 11:22:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.784 11:22:14 -- common/autotest_common.sh@10 -- # set +x 00:27:53.784 [2024-12-13 11:22:14.306324] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe6a140/0xe6e630) succeed. 00:27:53.784 [2024-12-13 11:22:14.314436] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe6b690/0xeafcd0) succeed. 
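With the new target up, nvmf_create_transport binds the RDMA transport and picks up both mlx5 devices on this node (mlx5_0 and mlx5_1). If a run dies at this step instead, it is usually worth confirming that the RDMA devices are visible and their ports are active before suspecting SPDK; with the rdma-core utilities installed, for example:

  ibv_devinfo | grep -E 'hca_id|state'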
00:27:54.043 11:22:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.043 11:22:14 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:54.043 11:22:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.043 11:22:14 -- common/autotest_common.sh@10 -- # set +x 00:27:54.043 Malloc0 00:27:54.043 11:22:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.043 11:22:14 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:54.043 11:22:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.043 11:22:14 -- common/autotest_common.sh@10 -- # set +x 00:27:54.043 11:22:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.043 11:22:14 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:54.043 11:22:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.043 11:22:14 -- common/autotest_common.sh@10 -- # set +x 00:27:54.043 11:22:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.043 11:22:14 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:54.043 11:22:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.043 11:22:14 -- common/autotest_common.sh@10 -- # set +x 00:27:54.043 [2024-12-13 11:22:14.445332] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:54.043 11:22:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.043 11:22:14 -- host/bdevperf.sh@38 -- # wait 1774759 00:27:54.043 [2024-12-13 11:22:14.491927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:54.043 [2024-12-13 11:22:14.491949] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:54.043 [2024-12-13 11:22:14.492044] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:54.043 [2024-12-13 11:22:14.492053] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:54.043 [2024-12-13 11:22:14.492060] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:27:54.043 [2024-12-13 11:22:14.493625] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:54.043 [2024-12-13 11:22:14.493731] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:54.043 [2024-12-13 11:22:14.505406] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:54.043 [2024-12-13 11:22:14.540564] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
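The rpc_cmd calls above rebuild the target configuration end to end: an RDMA transport, a 64 MB malloc bdev (Malloc0, 512-byte blocks), subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 as its namespace, and a listener on 192.168.100.8 port 4420; once the listener is up, the host's pending reset finally completes (Resetting controller successful). Outside the test harness the same bring-up can be done against a running nvmf_tgt with scripts/rpc.py — a sketch of the equivalent sequence, assuming the default /var/tmp/spdk.sock RPC socket:

  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The I/O that keeps hammering the subsystem through all of this comes from the bdevperf job whose summary follows: judging by the Job: line in the results table below, it was started with roughly -q 128 -o 4096 -w verify -t 15 (queue depth 128, 4 KiB I/O, verify workload, 15 seconds).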
00:28:04.023
00:28:04.023 Latency(us)
00:28:04.023 [2024-12-13T10:22:24.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:04.023 [2024-12-13T10:22:24.592Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:04.023 Verification LBA range: start 0x0 length 0x4000
00:28:04.023 Nvme1n1 : 15.00 19685.70 76.90 17741.66 0.00 3409.86 362.57 1062557.01
00:28:04.023 [2024-12-13T10:22:24.592Z] ===================================================================================================================
00:28:04.023 [2024-12-13T10:22:24.592Z] Total : 19685.70 76.90 17741.66 0.00 3409.86 362.57 1062557.01
00:28:04.023 11:22:22 -- host/bdevperf.sh@39 -- # sync
00:28:04.023 11:22:22 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:04.023 11:22:22 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:04.023 11:22:22 -- common/autotest_common.sh@10 -- # set +x
00:28:04.023 11:22:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:04.023 11:22:22 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:28:04.023 11:22:22 -- host/bdevperf.sh@44 -- # nvmftestfini
00:28:04.023 11:22:22 -- nvmf/common.sh@476 -- # nvmfcleanup
00:28:04.023 11:22:22 -- nvmf/common.sh@116 -- # sync
00:28:04.023 11:22:22 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']'
00:28:04.023 11:22:22 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']'
00:28:04.023 11:22:22 -- nvmf/common.sh@119 -- # set +e
00:28:04.023 11:22:22 -- nvmf/common.sh@120 -- # for i in {1..20}
00:28:04.023 11:22:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma
00:28:04.023 rmmod nvme_rdma
00:28:04.023 rmmod nvme_fabrics
00:28:04.023 11:22:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:28:04.023 11:22:23 -- nvmf/common.sh@123 -- # set -e
00:28:04.023 11:22:23 -- nvmf/common.sh@124 -- # return 0
00:28:04.023 11:22:23 -- nvmf/common.sh@477 -- # '[' -n 1775826 ']'
00:28:04.023 11:22:23 -- nvmf/common.sh@478 -- # killprocess 1775826
00:28:04.023 11:22:23 -- common/autotest_common.sh@936 -- # '[' -z 1775826 ']'
00:28:04.023 11:22:23 -- common/autotest_common.sh@940 -- # kill -0 1775826
00:28:04.023 11:22:23 -- common/autotest_common.sh@941 -- # uname
00:28:04.023 11:22:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:28:04.023 11:22:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1775826
00:28:04.023 11:22:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:28:04.023 11:22:23 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:28:04.023 11:22:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1775826'
00:28:04.023 killing process with pid 1775826
00:28:04.023 11:22:23 -- common/autotest_common.sh@955 -- # kill 1775826
00:28:04.023 11:22:23 -- common/autotest_common.sh@960 -- # wait 1775826
00:28:04.023 11:22:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:28:04.023 11:22:23 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]]
00:28:04.023
00:28:04.023 real 0m24.287s
00:28:04.023 user 1m4.284s
00:28:04.023 sys 0m5.112s
00:28:04.023 11:22:23 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:04.023 11:22:23 -- common/autotest_common.sh@10 -- # set +x
00:28:04.023 ************************************
00:28:04.023 END TEST nvmf_bdevperf
00:28:04.023 ************************************
00:28:04.023 11:22:23 -- nvmf/nvmf.sh@124 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh
--transport=rdma 00:28:04.023 11:22:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:04.023 11:22:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:04.023 11:22:23 -- common/autotest_common.sh@10 -- # set +x 00:28:04.023 ************************************ 00:28:04.023 START TEST nvmf_target_disconnect 00:28:04.023 ************************************ 00:28:04.023 11:22:23 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:28:04.023 * Looking for test storage... 00:28:04.023 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:04.023 11:22:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:04.023 11:22:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:04.023 11:22:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:04.023 11:22:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:04.023 11:22:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:04.023 11:22:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:04.023 11:22:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:04.023 11:22:23 -- scripts/common.sh@335 -- # IFS=.-: 00:28:04.023 11:22:23 -- scripts/common.sh@335 -- # read -ra ver1 00:28:04.023 11:22:23 -- scripts/common.sh@336 -- # IFS=.-: 00:28:04.023 11:22:23 -- scripts/common.sh@336 -- # read -ra ver2 00:28:04.023 11:22:23 -- scripts/common.sh@337 -- # local 'op=<' 00:28:04.023 11:22:23 -- scripts/common.sh@339 -- # ver1_l=2 00:28:04.023 11:22:23 -- scripts/common.sh@340 -- # ver2_l=1 00:28:04.023 11:22:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:04.023 11:22:23 -- scripts/common.sh@343 -- # case "$op" in 00:28:04.023 11:22:23 -- scripts/common.sh@344 -- # : 1 00:28:04.023 11:22:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:04.023 11:22:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:04.023 11:22:23 -- scripts/common.sh@364 -- # decimal 1 00:28:04.023 11:22:23 -- scripts/common.sh@352 -- # local d=1 00:28:04.023 11:22:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:04.023 11:22:23 -- scripts/common.sh@354 -- # echo 1 00:28:04.023 11:22:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:04.023 11:22:23 -- scripts/common.sh@365 -- # decimal 2 00:28:04.023 11:22:23 -- scripts/common.sh@352 -- # local d=2 00:28:04.023 11:22:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:04.023 11:22:23 -- scripts/common.sh@354 -- # echo 2 00:28:04.023 11:22:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:04.023 11:22:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:04.023 11:22:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:04.023 11:22:23 -- scripts/common.sh@367 -- # return 0 00:28:04.024 11:22:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:04.024 11:22:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:04.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.024 --rc genhtml_branch_coverage=1 00:28:04.024 --rc genhtml_function_coverage=1 00:28:04.024 --rc genhtml_legend=1 00:28:04.024 --rc geninfo_all_blocks=1 00:28:04.024 --rc geninfo_unexecuted_blocks=1 00:28:04.024 00:28:04.024 ' 00:28:04.024 11:22:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:04.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.024 --rc genhtml_branch_coverage=1 00:28:04.024 --rc genhtml_function_coverage=1 00:28:04.024 --rc genhtml_legend=1 00:28:04.024 --rc geninfo_all_blocks=1 00:28:04.024 --rc geninfo_unexecuted_blocks=1 00:28:04.024 00:28:04.024 ' 00:28:04.024 11:22:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:28:04.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.024 --rc genhtml_branch_coverage=1 00:28:04.024 --rc genhtml_function_coverage=1 00:28:04.024 --rc genhtml_legend=1 00:28:04.024 --rc geninfo_all_blocks=1 00:28:04.024 --rc geninfo_unexecuted_blocks=1 00:28:04.024 00:28:04.024 ' 00:28:04.024 11:22:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:04.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.024 --rc genhtml_branch_coverage=1 00:28:04.024 --rc genhtml_function_coverage=1 00:28:04.024 --rc genhtml_legend=1 00:28:04.024 --rc geninfo_all_blocks=1 00:28:04.024 --rc geninfo_unexecuted_blocks=1 00:28:04.024 00:28:04.024 ' 00:28:04.024 11:22:23 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:04.024 11:22:23 -- nvmf/common.sh@7 -- # uname -s 00:28:04.024 11:22:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:04.024 11:22:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:04.024 11:22:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:04.024 11:22:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:04.024 11:22:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:04.024 11:22:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:04.024 11:22:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:04.024 11:22:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:04.024 11:22:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:04.024 11:22:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:04.024 11:22:23 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:28:04.024 11:22:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:28:04.024 11:22:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:04.024 11:22:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:04.024 11:22:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:04.024 11:22:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:04.024 11:22:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:04.024 11:22:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:04.024 11:22:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:04.024 11:22:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.024 11:22:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.024 11:22:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.024 11:22:23 -- paths/export.sh@5 -- # export PATH 00:28:04.024 11:22:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.024 11:22:23 -- nvmf/common.sh@46 -- # : 0 00:28:04.024 11:22:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:04.024 11:22:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:04.024 11:22:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:04.024 11:22:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:04.024 11:22:23 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:04.024 11:22:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:04.024 11:22:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:04.024 11:22:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:04.024 11:22:23 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:28:04.024 11:22:23 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:04.024 11:22:23 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:04.024 11:22:23 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:28:04.024 11:22:23 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:28:04.024 11:22:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:04.024 11:22:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:04.024 11:22:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:04.024 11:22:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:04.024 11:22:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:04.024 11:22:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:04.024 11:22:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.024 11:22:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:04.024 11:22:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:04.024 11:22:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:04.024 11:22:23 -- common/autotest_common.sh@10 -- # set +x 00:28:09.297 11:22:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:09.297 11:22:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:09.297 11:22:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:09.297 11:22:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:09.297 11:22:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:09.297 11:22:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:09.297 11:22:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:09.297 11:22:29 -- nvmf/common.sh@294 -- # net_devs=() 00:28:09.297 11:22:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:09.297 11:22:29 -- nvmf/common.sh@295 -- # e810=() 00:28:09.297 11:22:29 -- nvmf/common.sh@295 -- # local -ga e810 00:28:09.297 11:22:29 -- nvmf/common.sh@296 -- # x722=() 00:28:09.297 11:22:29 -- nvmf/common.sh@296 -- # local -ga x722 00:28:09.297 11:22:29 -- nvmf/common.sh@297 -- # mlx=() 00:28:09.297 11:22:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:09.297 11:22:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:09.297 11:22:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:09.297 11:22:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:09.297 11:22:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:09.297 11:22:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:09.297 11:22:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:09.297 11:22:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:09.297 11:22:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:09.297 11:22:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:09.297 11:22:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:09.297 11:22:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:09.297 11:22:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 
00:28:09.297 11:22:29 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:28:09.297 11:22:29 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:28:09.297 11:22:29 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:28:09.297 11:22:29 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:28:09.297 11:22:29 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:28:09.297 11:22:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:09.297 11:22:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:09.297 11:22:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:28:09.297 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:28:09.297 11:22:29 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:09.297 11:22:29 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:09.297 11:22:29 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:09.297 11:22:29 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:09.297 11:22:29 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:09.297 11:22:29 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:09.297 11:22:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:09.298 11:22:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:28:09.298 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:28:09.298 11:22:29 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:09.298 11:22:29 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:09.298 11:22:29 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:09.298 11:22:29 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:09.298 11:22:29 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:09.298 11:22:29 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:09.298 11:22:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:09.298 11:22:29 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:28:09.298 11:22:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:09.298 11:22:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.298 11:22:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:09.298 11:22:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.298 11:22:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:28:09.298 Found net devices under 0000:18:00.0: mlx_0_0 00:28:09.298 11:22:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.298 11:22:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:09.298 11:22:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.298 11:22:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:09.298 11:22:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.298 11:22:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:28:09.298 Found net devices under 0000:18:00.1: mlx_0_1 00:28:09.298 11:22:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.298 11:22:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:09.298 11:22:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:09.298 11:22:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:09.298 11:22:29 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:28:09.298 11:22:29 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:28:09.298 11:22:29 -- nvmf/common.sh@408 -- # rdma_device_init 00:28:09.298 11:22:29 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:28:09.298 11:22:29 -- nvmf/common.sh@57 -- # uname 
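For reference, the NIC discovery traced just above (Found 0000:18:00.0 / 0000:18:00.1 with vendor 0x15b3, device 0x1015, and net devices mlx_0_0 / mlx_0_1) boils down to a sysfs walk along the lines of the sketch below. The PCI addresses and IDs are the ones reported in this run; the loop itself is illustrative and not the actual nvmf/common.sh code.

    # map each Mellanox PCI function found above to its netdev name via sysfs
    for pci in 0000:18:00.0 0000:18:00.1; do
        vendor=$(cat /sys/bus/pci/devices/$pci/vendor)    # 0x15b3 (Mellanox) in this run
        device=$(cat /sys/bus/pci/devices/$pci/device)    # 0x1015 in this run
        netdev=$(ls /sys/bus/pci/devices/$pci/net/ 2>/dev/null | head -n 1)
        echo "Found $pci ($vendor - $device) -> ${netdev:-no netdev}"
    done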
00:28:09.298 11:22:29 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:28:09.298 11:22:29 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:28:09.298 11:22:29 -- nvmf/common.sh@62 -- # modprobe ib_core 00:28:09.298 11:22:29 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:28:09.298 11:22:29 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:28:09.298 11:22:29 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:28:09.298 11:22:29 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:28:09.298 11:22:29 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:28:09.298 11:22:29 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:28:09.298 11:22:29 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:09.298 11:22:29 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:28:09.298 11:22:29 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:09.298 11:22:29 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:09.298 11:22:29 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:09.298 11:22:29 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:09.298 11:22:29 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:09.298 11:22:29 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:09.298 11:22:29 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:09.298 11:22:29 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:09.298 11:22:29 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:09.298 11:22:29 -- nvmf/common.sh@104 -- # continue 2 00:28:09.298 11:22:29 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:09.298 11:22:29 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:09.298 11:22:29 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:09.298 11:22:29 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:09.298 11:22:29 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:09.298 11:22:29 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:09.298 11:22:29 -- nvmf/common.sh@104 -- # continue 2 00:28:09.298 11:22:29 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:09.298 11:22:29 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:28:09.298 11:22:29 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:09.298 11:22:29 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:09.298 11:22:29 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:09.298 11:22:29 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:09.298 11:22:29 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:28:09.298 11:22:29 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:28:09.298 11:22:29 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:28:09.298 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:09.298 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:28:09.298 altname enp24s0f0np0 00:28:09.298 altname ens785f0np0 00:28:09.298 inet 192.168.100.8/24 scope global mlx_0_0 00:28:09.298 valid_lft forever preferred_lft forever 00:28:09.298 11:22:29 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:09.298 11:22:29 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:28:09.298 11:22:29 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:09.298 11:22:29 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:09.298 11:22:29 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:09.298 11:22:29 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:09.298 11:22:29 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:28:09.298 11:22:29 -- nvmf/common.sh@74 -- # [[ 
-z 192.168.100.9 ]] 00:28:09.298 11:22:29 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:28:09.298 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:09.298 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:28:09.298 altname enp24s0f1np1 00:28:09.298 altname ens785f1np1 00:28:09.298 inet 192.168.100.9/24 scope global mlx_0_1 00:28:09.298 valid_lft forever preferred_lft forever 00:28:09.298 11:22:29 -- nvmf/common.sh@410 -- # return 0 00:28:09.298 11:22:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:09.298 11:22:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:09.298 11:22:29 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:28:09.298 11:22:29 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:28:09.298 11:22:29 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:28:09.298 11:22:29 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:09.298 11:22:29 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:09.298 11:22:29 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:09.298 11:22:29 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:09.298 11:22:29 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:09.298 11:22:29 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:09.298 11:22:29 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:09.298 11:22:29 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:09.298 11:22:29 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:09.298 11:22:29 -- nvmf/common.sh@104 -- # continue 2 00:28:09.298 11:22:29 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:09.298 11:22:29 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:09.298 11:22:29 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:09.298 11:22:29 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:09.298 11:22:29 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:09.298 11:22:29 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:09.298 11:22:29 -- nvmf/common.sh@104 -- # continue 2 00:28:09.298 11:22:29 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:09.298 11:22:29 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:28:09.298 11:22:29 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:09.298 11:22:29 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:09.298 11:22:29 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:09.298 11:22:29 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:09.298 11:22:29 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:09.298 11:22:29 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:28:09.298 11:22:29 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:09.298 11:22:29 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:09.298 11:22:29 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:09.298 11:22:29 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:09.298 11:22:29 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:28:09.298 192.168.100.9' 00:28:09.298 11:22:29 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:28:09.298 192.168.100.9' 00:28:09.298 11:22:29 -- nvmf/common.sh@445 -- # head -n 1 00:28:09.298 11:22:29 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:09.298 11:22:29 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:28:09.298 192.168.100.9' 00:28:09.298 11:22:29 -- nvmf/common.sh@446 -- # tail -n +2 00:28:09.298 11:22:29 -- nvmf/common.sh@446 -- # head 
-n 1 00:28:09.298 11:22:29 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:09.298 11:22:29 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:28:09.298 11:22:29 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:09.298 11:22:29 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:28:09.298 11:22:29 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:28:09.298 11:22:29 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:28:09.298 11:22:29 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:09.298 11:22:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:09.298 11:22:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:09.298 11:22:29 -- common/autotest_common.sh@10 -- # set +x 00:28:09.298 ************************************ 00:28:09.298 START TEST nvmf_target_disconnect_tc1 00:28:09.298 ************************************ 00:28:09.298 11:22:29 -- common/autotest_common.sh@1114 -- # nvmf_target_disconnect_tc1 00:28:09.298 11:22:29 -- host/target_disconnect.sh@32 -- # set +e 00:28:09.298 11:22:29 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:09.298 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.298 [2024-12-13 11:22:29.318225] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:09.298 [2024-12-13 11:22:29.318264] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:09.299 [2024-12-13 11:22:29.318274] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d70c0 00:28:09.867 [2024-12-13 11:22:30.322238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:09.867 [2024-12-13 11:22:30.322307] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:28:09.867 [2024-12-13 11:22:30.322333] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:28:09.867 [2024-12-13 11:22:30.322388] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:09.867 [2024-12-13 11:22:30.322396] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:28:09.867 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:28:09.867 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:28:09.867 Initializing NVMe Controllers 00:28:09.867 11:22:30 -- host/target_disconnect.sh@33 -- # trap - ERR 00:28:09.867 11:22:30 -- host/target_disconnect.sh@33 -- # print_backtrace 00:28:09.867 11:22:30 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:28:09.867 11:22:30 -- common/autotest_common.sh@1142 -- # return 0 00:28:09.867 11:22:30 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:28:09.867 11:22:30 -- host/target_disconnect.sh@41 -- # set -e 00:28:09.867 00:28:09.867 real 0m1.107s 00:28:09.867 user 0m0.914s 00:28:09.867 sys 0m0.180s 00:28:09.867 11:22:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:09.867 11:22:30 -- common/autotest_common.sh@10 -- # set +x 00:28:09.867 ************************************ 00:28:09.867 END TEST nvmf_target_disconnect_tc1 00:28:09.867 ************************************ 00:28:09.867 11:22:30 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:09.867 11:22:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:09.867 11:22:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:09.867 11:22:30 -- common/autotest_common.sh@10 -- # set +x 00:28:09.867 ************************************ 00:28:09.867 START TEST nvmf_target_disconnect_tc2 00:28:09.867 ************************************ 00:28:09.867 11:22:30 -- common/autotest_common.sh@1114 -- # nvmf_target_disconnect_tc2 00:28:09.867 11:22:30 -- host/target_disconnect.sh@45 -- # disconnect_init 192.168.100.8 00:28:09.867 11:22:30 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:09.867 11:22:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:09.867 11:22:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:09.867 11:22:30 -- common/autotest_common.sh@10 -- # set +x 00:28:09.867 11:22:30 -- nvmf/common.sh@469 -- # nvmfpid=1781067 00:28:09.867 11:22:30 -- nvmf/common.sh@470 -- # waitforlisten 1781067 00:28:09.867 11:22:30 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:09.867 11:22:30 -- common/autotest_common.sh@829 -- # '[' -z 1781067 ']' 00:28:09.867 11:22:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:09.867 11:22:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:09.867 11:22:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:09.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:09.867 11:22:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:09.867 11:22:30 -- common/autotest_common.sh@10 -- # set +x 00:28:09.867 [2024-12-13 11:22:30.424882] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:28:09.867 [2024-12-13 11:22:30.424929] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:10.127 EAL: No free 2048 kB hugepages reported on node 1 00:28:10.127 [2024-12-13 11:22:30.493175] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:10.127 [2024-12-13 11:22:30.557264] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:10.127 [2024-12-13 11:22:30.557380] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:10.127 [2024-12-13 11:22:30.557387] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:10.127 [2024-12-13 11:22:30.557393] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:10.127 [2024-12-13 11:22:30.557509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:28:10.127 [2024-12-13 11:22:30.557617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:28:10.127 [2024-12-13 11:22:30.557725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:10.127 [2024-12-13 11:22:30.557727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:28:10.695 11:22:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:10.695 11:22:31 -- common/autotest_common.sh@862 -- # return 0 00:28:10.695 11:22:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:10.695 11:22:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:10.695 11:22:31 -- common/autotest_common.sh@10 -- # set +x 00:28:10.695 11:22:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:10.695 11:22:31 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:10.695 11:22:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.695 11:22:31 -- common/autotest_common.sh@10 -- # set +x 00:28:10.955 Malloc0 00:28:10.955 11:22:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.955 11:22:31 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:28:10.955 11:22:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.955 11:22:31 -- common/autotest_common.sh@10 -- # set +x 00:28:10.955 [2024-12-13 11:22:31.294674] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x767d60/0x773700) succeed. 00:28:10.955 [2024-12-13 11:22:31.302987] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x769350/0x7f3740) succeed. 
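The nvmf_tgt instance above was started with -m 0xF0, which is why its reactors come up on cores 4, 5, 6 and 7. A quick, purely illustrative way to expand such a hex core mask (not part of the test itself):

    # 0xF0 = 0b11110000 -> bits 4..7 are set, matching the reactor_run lines above
    mask=0xF0
    for core in {0..31}; do
        (( (mask >> core) & 1 )) && echo "core $core selected"
    done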
00:28:10.955 11:22:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.955 11:22:31 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:10.955 11:22:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.955 11:22:31 -- common/autotest_common.sh@10 -- # set +x 00:28:10.955 11:22:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.955 11:22:31 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:10.955 11:22:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.955 11:22:31 -- common/autotest_common.sh@10 -- # set +x 00:28:10.955 11:22:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.955 11:22:31 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:10.955 11:22:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.955 11:22:31 -- common/autotest_common.sh@10 -- # set +x 00:28:10.955 [2024-12-13 11:22:31.439098] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:10.955 11:22:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.955 11:22:31 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:28:10.955 11:22:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.955 11:22:31 -- common/autotest_common.sh@10 -- # set +x 00:28:10.955 11:22:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.955 11:22:31 -- host/target_disconnect.sh@50 -- # reconnectpid=1781184 00:28:10.955 11:22:31 -- host/target_disconnect.sh@52 -- # sleep 2 00:28:10.955 11:22:31 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:10.955 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.492 11:22:33 -- host/target_disconnect.sh@53 -- # kill -9 1781067 00:28:13.492 11:22:33 -- host/target_disconnect.sh@55 -- # sleep 2 00:28:14.060 Read completed with error (sct=0, sc=8) 00:28:14.060 starting I/O failed 00:28:14.060 Read completed with error (sct=0, sc=8) 00:28:14.060 starting I/O failed 00:28:14.060 Read completed with error (sct=0, sc=8) 00:28:14.060 starting I/O failed 00:28:14.060 Read completed with error (sct=0, sc=8) 00:28:14.060 starting I/O failed 00:28:14.060 Read completed with error (sct=0, sc=8) 00:28:14.060 starting I/O failed 00:28:14.060 Read completed with error (sct=0, sc=8) 00:28:14.060 starting I/O failed 00:28:14.060 Read completed with error (sct=0, sc=8) 00:28:14.060 starting I/O failed 00:28:14.060 Write completed with error (sct=0, sc=8) 00:28:14.060 starting I/O failed 00:28:14.060 Write completed with error (sct=0, sc=8) 00:28:14.060 starting I/O failed 00:28:14.060 Read completed with error (sct=0, sc=8) 00:28:14.060 starting I/O failed 00:28:14.060 Write completed with error (sct=0, sc=8) 00:28:14.060 starting I/O failed 00:28:14.060 Write completed with error (sct=0, sc=8) 00:28:14.060 starting I/O failed 00:28:14.060 Write completed with error (sct=0, sc=8) 00:28:14.060 starting I/O failed 00:28:14.060 Write completed with error (sct=0, sc=8) 00:28:14.060 starting I/O failed 00:28:14.060 Read completed with error (sct=0, sc=8) 00:28:14.060 starting I/O failed 00:28:14.060 Read completed with error 
(sct=0, sc=8) 00:28:14.060 starting I/O failed 00:28:14.060 Write completed with error (sct=0, sc=8) 00:28:14.060 starting I/O failed 00:28:14.060 Read completed with error (sct=0, sc=8) 00:28:14.060 starting I/O failed 00:28:14.060 Read completed with error (sct=0, sc=8) 00:28:14.060 starting I/O failed 00:28:14.060 Read completed with error (sct=0, sc=8) 00:28:14.060 starting I/O failed 00:28:14.060 Read completed with error (sct=0, sc=8) 00:28:14.060 starting I/O failed 00:28:14.060 Read completed with error (sct=0, sc=8) 00:28:14.060 starting I/O failed 00:28:14.060 Write completed with error (sct=0, sc=8) 00:28:14.060 starting I/O failed 00:28:14.060 Read completed with error (sct=0, sc=8) 00:28:14.060 starting I/O failed 00:28:14.060 Read completed with error (sct=0, sc=8) 00:28:14.060 starting I/O failed 00:28:14.060 Write completed with error (sct=0, sc=8) 00:28:14.060 starting I/O failed 00:28:14.060 Read completed with error (sct=0, sc=8) 00:28:14.060 starting I/O failed 00:28:14.060 Write completed with error (sct=0, sc=8) 00:28:14.060 starting I/O failed 00:28:14.060 Write completed with error (sct=0, sc=8) 00:28:14.060 starting I/O failed 00:28:14.060 Write completed with error (sct=0, sc=8) 00:28:14.060 starting I/O failed 00:28:14.060 Read completed with error (sct=0, sc=8) 00:28:14.060 starting I/O failed 00:28:14.060 Write completed with error (sct=0, sc=8) 00:28:14.060 starting I/O failed 00:28:14.060 [2024-12-13 11:22:34.608754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:14.998 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 1781067 Killed "${NVMF_APP[@]}" "$@" 00:28:14.998 11:22:35 -- host/target_disconnect.sh@56 -- # disconnect_init 192.168.100.8 00:28:14.998 11:22:35 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:14.998 11:22:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:14.998 11:22:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:14.998 11:22:35 -- common/autotest_common.sh@10 -- # set +x 00:28:14.998 11:22:35 -- nvmf/common.sh@469 -- # nvmfpid=1781903 00:28:14.998 11:22:35 -- nvmf/common.sh@470 -- # waitforlisten 1781903 00:28:14.998 11:22:35 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:14.998 11:22:35 -- common/autotest_common.sh@829 -- # '[' -z 1781903 ']' 00:28:14.998 11:22:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:14.998 11:22:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:14.998 11:22:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:14.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:14.998 11:22:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:14.998 11:22:35 -- common/autotest_common.sh@10 -- # set +x 00:28:14.998 [2024-12-13 11:22:35.512090] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:28:14.998 [2024-12-13 11:22:35.512136] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:14.998 EAL: No free 2048 kB hugepages reported on node 1 00:28:15.258 [2024-12-13 11:22:35.577598] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:15.258 Write completed with error (sct=0, sc=8) 00:28:15.258 starting I/O failed 00:28:15.258 Write completed with error (sct=0, sc=8) 00:28:15.258 starting I/O failed 00:28:15.258 Read completed with error (sct=0, sc=8) 00:28:15.258 starting I/O failed 00:28:15.258 Write completed with error (sct=0, sc=8) 00:28:15.258 starting I/O failed 00:28:15.258 Write completed with error (sct=0, sc=8) 00:28:15.258 starting I/O failed 00:28:15.258 Read completed with error (sct=0, sc=8) 00:28:15.258 starting I/O failed 00:28:15.258 Read completed with error (sct=0, sc=8) 00:28:15.258 starting I/O failed 00:28:15.258 Read completed with error (sct=0, sc=8) 00:28:15.258 starting I/O failed 00:28:15.258 Read completed with error (sct=0, sc=8) 00:28:15.258 starting I/O failed 00:28:15.258 Write completed with error (sct=0, sc=8) 00:28:15.258 starting I/O failed 00:28:15.258 Write completed with error (sct=0, sc=8) 00:28:15.258 starting I/O failed 00:28:15.258 Write completed with error (sct=0, sc=8) 00:28:15.258 starting I/O failed 00:28:15.258 Read completed with error (sct=0, sc=8) 00:28:15.258 starting I/O failed 00:28:15.258 Write completed with error (sct=0, sc=8) 00:28:15.258 starting I/O failed 00:28:15.258 Read completed with error (sct=0, sc=8) 00:28:15.258 starting I/O failed 00:28:15.258 Write completed with error (sct=0, sc=8) 00:28:15.258 starting I/O failed 00:28:15.258 Write completed with error (sct=0, sc=8) 00:28:15.258 starting I/O failed 00:28:15.258 Write completed with error (sct=0, sc=8) 00:28:15.258 starting I/O failed 00:28:15.258 Write completed with error (sct=0, sc=8) 00:28:15.258 starting I/O failed 00:28:15.258 Write completed with error (sct=0, sc=8) 00:28:15.258 starting I/O failed 00:28:15.258 Read completed with error (sct=0, sc=8) 00:28:15.258 starting I/O failed 00:28:15.258 Write completed with error (sct=0, sc=8) 00:28:15.258 starting I/O failed 00:28:15.258 Read completed with error (sct=0, sc=8) 00:28:15.258 starting I/O failed 00:28:15.258 Write completed with error (sct=0, sc=8) 00:28:15.258 starting I/O failed 00:28:15.258 Read completed with error (sct=0, sc=8) 00:28:15.258 starting I/O failed 00:28:15.258 Read completed with error (sct=0, sc=8) 00:28:15.258 starting I/O failed 00:28:15.258 Write completed with error (sct=0, sc=8) 00:28:15.258 starting I/O failed 00:28:15.258 Write completed with error (sct=0, sc=8) 00:28:15.258 starting I/O failed 00:28:15.258 Write completed with error (sct=0, sc=8) 00:28:15.258 starting I/O failed 00:28:15.258 Read completed with error (sct=0, sc=8) 00:28:15.258 starting I/O failed 00:28:15.258 Write completed with error (sct=0, sc=8) 00:28:15.258 starting I/O failed 00:28:15.258 Read completed with error (sct=0, sc=8) 00:28:15.258 starting I/O failed 00:28:15.258 [2024-12-13 11:22:35.613730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:15.258 [2024-12-13 11:22:35.615317] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel 
(status = 8) 00:28:15.258 [2024-12-13 11:22:35.615335] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:15.258 [2024-12-13 11:22:35.615342] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:15.258 [2024-12-13 11:22:35.647721] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:15.258 [2024-12-13 11:22:35.647813] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:15.258 [2024-12-13 11:22:35.647821] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:15.258 [2024-12-13 11:22:35.647827] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:15.258 [2024-12-13 11:22:35.647935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:28:15.258 [2024-12-13 11:22:35.648042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:28:15.258 [2024-12-13 11:22:35.648147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:15.258 [2024-12-13 11:22:35.648148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:28:15.826 11:22:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:15.826 11:22:36 -- common/autotest_common.sh@862 -- # return 0 00:28:15.826 11:22:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:15.826 11:22:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:15.826 11:22:36 -- common/autotest_common.sh@10 -- # set +x 00:28:15.826 11:22:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:15.826 11:22:36 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:15.826 11:22:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.826 11:22:36 -- common/autotest_common.sh@10 -- # set +x 00:28:15.826 Malloc0 00:28:15.826 11:22:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.826 11:22:36 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:28:15.826 11:22:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.826 11:22:36 -- common/autotest_common.sh@10 -- # set +x 00:28:16.086 [2024-12-13 11:22:36.402113] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7a2d60/0x7ae700) succeed. 00:28:16.086 [2024-12-13 11:22:36.410698] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x7a4350/0x82e740) succeed. 
00:28:16.086 11:22:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.086 11:22:36 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:16.086 11:22:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.086 11:22:36 -- common/autotest_common.sh@10 -- # set +x 00:28:16.086 11:22:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.086 11:22:36 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:16.086 11:22:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.086 11:22:36 -- common/autotest_common.sh@10 -- # set +x 00:28:16.086 11:22:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.086 11:22:36 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:16.086 11:22:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.086 11:22:36 -- common/autotest_common.sh@10 -- # set +x 00:28:16.086 [2024-12-13 11:22:36.543346] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:16.086 11:22:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.086 11:22:36 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:28:16.086 11:22:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.086 11:22:36 -- common/autotest_common.sh@10 -- # set +x 00:28:16.086 11:22:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.086 11:22:36 -- host/target_disconnect.sh@58 -- # wait 1781184 00:28:16.086 [2024-12-13 11:22:36.619253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.086 qpair failed and we were unable to recover it. 00:28:16.086 [2024-12-13 11:22:36.631295] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.086 [2024-12-13 11:22:36.631340] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.086 [2024-12-13 11:22:36.631356] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.086 [2024-12-13 11:22:36.631364] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.086 [2024-12-13 11:22:36.631381] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.086 [2024-12-13 11:22:36.641544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.086 qpair failed and we were unable to recover it. 
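The disconnect_init bring-up traced above (for both target instances) can be reproduced by hand with the same commands the test issues through rpc_cmd. The sketch below assumes rpc_cmd is the test framework's thin wrapper around scripts/rpc.py and forwards these arguments unchanged; the binaries, paths and arguments are taken from this log.

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    # start the target app the same way nvmfappstart does
    $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    sleep 2    # stand-in for waitforlisten
    # same RPC sequence as host/target_disconnect.sh@19..26 above
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    $SPDK/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420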
00:28:16.086 [2024-12-13 11:22:36.651374] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.086 [2024-12-13 11:22:36.651408] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.086 [2024-12-13 11:22:36.651426] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.086 [2024-12-13 11:22:36.651434] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.086 [2024-12-13 11:22:36.651440] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.346 [2024-12-13 11:22:36.661609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.346 qpair failed and we were unable to recover it. 00:28:16.346 [2024-12-13 11:22:36.671382] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.346 [2024-12-13 11:22:36.671420] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.346 [2024-12-13 11:22:36.671435] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.346 [2024-12-13 11:22:36.671442] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.346 [2024-12-13 11:22:36.671448] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.346 [2024-12-13 11:22:36.681751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.346 qpair failed and we were unable to recover it. 00:28:16.346 [2024-12-13 11:22:36.691408] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.346 [2024-12-13 11:22:36.691449] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.346 [2024-12-13 11:22:36.691464] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.346 [2024-12-13 11:22:36.691471] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.346 [2024-12-13 11:22:36.691476] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.346 [2024-12-13 11:22:36.701743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.346 qpair failed and we were unable to recover it. 
00:28:16.346 [2024-12-13 11:22:36.711532] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.346 [2024-12-13 11:22:36.711574] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.347 [2024-12-13 11:22:36.711588] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.347 [2024-12-13 11:22:36.711595] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.347 [2024-12-13 11:22:36.711601] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.347 [2024-12-13 11:22:36.721911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.347 qpair failed and we were unable to recover it. 00:28:16.347 [2024-12-13 11:22:36.731570] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.347 [2024-12-13 11:22:36.731611] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.347 [2024-12-13 11:22:36.731625] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.347 [2024-12-13 11:22:36.731632] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.347 [2024-12-13 11:22:36.731643] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.347 [2024-12-13 11:22:36.741796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.347 qpair failed and we were unable to recover it. 00:28:16.347 [2024-12-13 11:22:36.751701] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.347 [2024-12-13 11:22:36.751735] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.347 [2024-12-13 11:22:36.751749] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.347 [2024-12-13 11:22:36.751756] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.347 [2024-12-13 11:22:36.751762] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.347 [2024-12-13 11:22:36.761900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.347 qpair failed and we were unable to recover it. 
00:28:16.347 [2024-12-13 11:22:36.771646] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.347 [2024-12-13 11:22:36.771683] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.347 [2024-12-13 11:22:36.771697] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.347 [2024-12-13 11:22:36.771704] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.347 [2024-12-13 11:22:36.771710] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.347 [2024-12-13 11:22:36.781904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.347 qpair failed and we were unable to recover it. 00:28:16.347 [2024-12-13 11:22:36.791717] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.347 [2024-12-13 11:22:36.791757] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.347 [2024-12-13 11:22:36.791770] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.347 [2024-12-13 11:22:36.791777] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.347 [2024-12-13 11:22:36.791783] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.347 [2024-12-13 11:22:36.801965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.347 qpair failed and we were unable to recover it. 00:28:16.347 [2024-12-13 11:22:36.811641] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.347 [2024-12-13 11:22:36.811678] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.347 [2024-12-13 11:22:36.811692] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.347 [2024-12-13 11:22:36.811699] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.347 [2024-12-13 11:22:36.811705] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.347 [2024-12-13 11:22:36.822026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.347 qpair failed and we were unable to recover it. 
00:28:16.347 [2024-12-13 11:22:36.831803] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.347 [2024-12-13 11:22:36.831843] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.347 [2024-12-13 11:22:36.831857] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.347 [2024-12-13 11:22:36.831864] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.347 [2024-12-13 11:22:36.831870] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.347 [2024-12-13 11:22:36.842389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.347 qpair failed and we were unable to recover it. 00:28:16.347 [2024-12-13 11:22:36.851946] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.347 [2024-12-13 11:22:36.851986] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.347 [2024-12-13 11:22:36.852000] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.347 [2024-12-13 11:22:36.852007] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.347 [2024-12-13 11:22:36.852013] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.347 [2024-12-13 11:22:36.862262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.347 qpair failed and we were unable to recover it. 00:28:16.347 [2024-12-13 11:22:36.871852] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.347 [2024-12-13 11:22:36.871890] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.347 [2024-12-13 11:22:36.871904] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.347 [2024-12-13 11:22:36.871911] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.347 [2024-12-13 11:22:36.871917] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.347 [2024-12-13 11:22:36.882220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.347 qpair failed and we were unable to recover it. 
00:28:16.347 [2024-12-13 11:22:36.892019] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.347 [2024-12-13 11:22:36.892052] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.347 [2024-12-13 11:22:36.892066] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.347 [2024-12-13 11:22:36.892072] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.347 [2024-12-13 11:22:36.892078] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.347 [2024-12-13 11:22:36.902452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.347 qpair failed and we were unable to recover it. 00:28:16.347 [2024-12-13 11:22:36.912182] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.347 [2024-12-13 11:22:36.912224] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.347 [2024-12-13 11:22:36.912237] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.347 [2024-12-13 11:22:36.912247] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.347 [2024-12-13 11:22:36.912253] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.607 [2024-12-13 11:22:36.922293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.607 qpair failed and we were unable to recover it. 00:28:16.607 [2024-12-13 11:22:36.932102] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.607 [2024-12-13 11:22:36.932144] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.607 [2024-12-13 11:22:36.932157] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.607 [2024-12-13 11:22:36.932164] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.607 [2024-12-13 11:22:36.932170] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.607 [2024-12-13 11:22:36.942414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.607 qpair failed and we were unable to recover it. 
00:28:16.607 [2024-12-13 11:22:36.952107] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.607 [2024-12-13 11:22:36.952146] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.607 [2024-12-13 11:22:36.952161] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.607 [2024-12-13 11:22:36.952168] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.607 [2024-12-13 11:22:36.952173] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.607 [2024-12-13 11:22:36.962527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.607 qpair failed and we were unable to recover it. 00:28:16.607 [2024-12-13 11:22:36.972164] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.607 [2024-12-13 11:22:36.972201] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.607 [2024-12-13 11:22:36.972214] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.607 [2024-12-13 11:22:36.972221] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.607 [2024-12-13 11:22:36.972227] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.607 [2024-12-13 11:22:36.982470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.607 qpair failed and we were unable to recover it. 00:28:16.607 [2024-12-13 11:22:36.992303] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.607 [2024-12-13 11:22:36.992337] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.607 [2024-12-13 11:22:36.992351] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.607 [2024-12-13 11:22:36.992358] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.607 [2024-12-13 11:22:36.992364] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.607 [2024-12-13 11:22:37.002706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.607 qpair failed and we were unable to recover it. 
00:28:16.607 [2024-12-13 11:22:37.012349] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.607 [2024-12-13 11:22:37.012389] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.607 [2024-12-13 11:22:37.012403] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.607 [2024-12-13 11:22:37.012410] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.607 [2024-12-13 11:22:37.012416] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.607 [2024-12-13 11:22:37.022791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.607 qpair failed and we were unable to recover it. 00:28:16.607 [2024-12-13 11:22:37.032500] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.608 [2024-12-13 11:22:37.032538] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.608 [2024-12-13 11:22:37.032552] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.608 [2024-12-13 11:22:37.032559] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.608 [2024-12-13 11:22:37.032565] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.608 [2024-12-13 11:22:37.042851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.608 qpair failed and we were unable to recover it. 00:28:16.608 [2024-12-13 11:22:37.052539] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.608 [2024-12-13 11:22:37.052582] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.608 [2024-12-13 11:22:37.052595] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.608 [2024-12-13 11:22:37.052601] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.608 [2024-12-13 11:22:37.052607] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.608 [2024-12-13 11:22:37.062685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.608 qpair failed and we were unable to recover it. 
00:28:16.608 [2024-12-13 11:22:37.072573] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.608 [2024-12-13 11:22:37.072607] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.608 [2024-12-13 11:22:37.072621] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.608 [2024-12-13 11:22:37.072628] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.608 [2024-12-13 11:22:37.072634] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.608 [2024-12-13 11:22:37.082987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.608 qpair failed and we were unable to recover it. 00:28:16.608 [2024-12-13 11:22:37.092535] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.608 [2024-12-13 11:22:37.092574] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.608 [2024-12-13 11:22:37.092590] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.608 [2024-12-13 11:22:37.092597] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.608 [2024-12-13 11:22:37.092603] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.608 [2024-12-13 11:22:37.102849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.608 qpair failed and we were unable to recover it. 00:28:16.608 [2024-12-13 11:22:37.112607] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.608 [2024-12-13 11:22:37.112647] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.608 [2024-12-13 11:22:37.112661] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.608 [2024-12-13 11:22:37.112668] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.608 [2024-12-13 11:22:37.112675] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.608 [2024-12-13 11:22:37.122860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.608 qpair failed and we were unable to recover it. 
00:28:16.608 [2024-12-13 11:22:37.132633] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.608 [2024-12-13 11:22:37.132667] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.608 [2024-12-13 11:22:37.132680] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.608 [2024-12-13 11:22:37.132687] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.608 [2024-12-13 11:22:37.132694] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.608 [2024-12-13 11:22:37.143069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.608 qpair failed and we were unable to recover it. 00:28:16.608 [2024-12-13 11:22:37.152696] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.608 [2024-12-13 11:22:37.152734] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.608 [2024-12-13 11:22:37.152748] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.608 [2024-12-13 11:22:37.152755] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.608 [2024-12-13 11:22:37.152761] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.608 [2024-12-13 11:22:37.163103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.608 qpair failed and we were unable to recover it. 00:28:16.608 [2024-12-13 11:22:37.172668] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.608 [2024-12-13 11:22:37.172708] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.608 [2024-12-13 11:22:37.172721] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.608 [2024-12-13 11:22:37.172728] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.608 [2024-12-13 11:22:37.172737] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.868 [2024-12-13 11:22:37.183032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.868 qpair failed and we were unable to recover it. 
00:28:16.868 [2024-12-13 11:22:37.192904] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.868 [2024-12-13 11:22:37.192944] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.868 [2024-12-13 11:22:37.192958] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.868 [2024-12-13 11:22:37.192965] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.868 [2024-12-13 11:22:37.192971] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.868 [2024-12-13 11:22:37.203056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.868 qpair failed and we were unable to recover it. 00:28:16.868 [2024-12-13 11:22:37.212854] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.868 [2024-12-13 11:22:37.212888] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.868 [2024-12-13 11:22:37.212901] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.868 [2024-12-13 11:22:37.212908] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.868 [2024-12-13 11:22:37.212914] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.868 [2024-12-13 11:22:37.223165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.868 qpair failed and we were unable to recover it. 00:28:16.868 [2024-12-13 11:22:37.232949] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.868 [2024-12-13 11:22:37.232987] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.868 [2024-12-13 11:22:37.233000] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.868 [2024-12-13 11:22:37.233007] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.868 [2024-12-13 11:22:37.233013] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.868 [2024-12-13 11:22:37.243211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.868 qpair failed and we were unable to recover it. 
00:28:16.868 [2024-12-13 11:22:37.252993] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.868 [2024-12-13 11:22:37.253032] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.868 [2024-12-13 11:22:37.253046] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.868 [2024-12-13 11:22:37.253053] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.868 [2024-12-13 11:22:37.253059] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.868 [2024-12-13 11:22:37.263471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.868 qpair failed and we were unable to recover it. 00:28:16.868 [2024-12-13 11:22:37.273078] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.868 [2024-12-13 11:22:37.273122] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.868 [2024-12-13 11:22:37.273135] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.868 [2024-12-13 11:22:37.273142] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.868 [2024-12-13 11:22:37.273148] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.868 [2024-12-13 11:22:37.283518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.868 qpair failed and we were unable to recover it. 00:28:16.868 [2024-12-13 11:22:37.293086] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.868 [2024-12-13 11:22:37.293128] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.868 [2024-12-13 11:22:37.293141] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.868 [2024-12-13 11:22:37.293148] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.868 [2024-12-13 11:22:37.293154] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.868 [2024-12-13 11:22:37.303347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.868 qpair failed and we were unable to recover it. 
00:28:16.868 [2024-12-13 11:22:37.313214] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.868 [2024-12-13 11:22:37.313251] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.868 [2024-12-13 11:22:37.313264] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.868 [2024-12-13 11:22:37.313276] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.868 [2024-12-13 11:22:37.313281] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.868 [2024-12-13 11:22:37.323498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.868 qpair failed and we were unable to recover it. 00:28:16.868 [2024-12-13 11:22:37.333254] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.868 [2024-12-13 11:22:37.333300] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.868 [2024-12-13 11:22:37.333315] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.868 [2024-12-13 11:22:37.333321] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.868 [2024-12-13 11:22:37.333327] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.868 [2024-12-13 11:22:37.343496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.868 qpair failed and we were unable to recover it. 00:28:16.868 [2024-12-13 11:22:37.353341] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.868 [2024-12-13 11:22:37.353377] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.868 [2024-12-13 11:22:37.353391] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.868 [2024-12-13 11:22:37.353400] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.868 [2024-12-13 11:22:37.353406] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.868 [2024-12-13 11:22:37.363656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.868 qpair failed and we were unable to recover it. 
00:28:16.868 [2024-12-13 11:22:37.373423] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.868 [2024-12-13 11:22:37.373461] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.868 [2024-12-13 11:22:37.373474] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.868 [2024-12-13 11:22:37.373481] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.868 [2024-12-13 11:22:37.373487] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.869 [2024-12-13 11:22:37.383506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.869 qpair failed and we were unable to recover it. 00:28:16.869 [2024-12-13 11:22:37.393466] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.869 [2024-12-13 11:22:37.393505] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.869 [2024-12-13 11:22:37.393518] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.869 [2024-12-13 11:22:37.393525] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.869 [2024-12-13 11:22:37.393531] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.869 [2024-12-13 11:22:37.403865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.869 qpair failed and we were unable to recover it. 00:28:16.869 [2024-12-13 11:22:37.413603] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.869 [2024-12-13 11:22:37.413640] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.869 [2024-12-13 11:22:37.413654] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.869 [2024-12-13 11:22:37.413661] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.869 [2024-12-13 11:22:37.413666] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:16.869 [2024-12-13 11:22:37.423679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:16.869 qpair failed and we were unable to recover it. 
00:28:16.869 [2024-12-13 11:22:37.433579] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.869 [2024-12-13 11:22:37.433615] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.869 [2024-12-13 11:22:37.433628] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.869 [2024-12-13 11:22:37.433635] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.869 [2024-12-13 11:22:37.433640] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.128 [2024-12-13 11:22:37.443834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.128 qpair failed and we were unable to recover it. 00:28:17.128 [2024-12-13 11:22:37.453526] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.128 [2024-12-13 11:22:37.453566] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.128 [2024-12-13 11:22:37.453579] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.128 [2024-12-13 11:22:37.453586] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.128 [2024-12-13 11:22:37.453592] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.129 [2024-12-13 11:22:37.463903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.129 qpair failed and we were unable to recover it. 00:28:17.129 [2024-12-13 11:22:37.473670] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.129 [2024-12-13 11:22:37.473701] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.129 [2024-12-13 11:22:37.473715] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.129 [2024-12-13 11:22:37.473722] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.129 [2024-12-13 11:22:37.473728] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.129 [2024-12-13 11:22:37.483984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.129 qpair failed and we were unable to recover it. 
00:28:17.129 [2024-12-13 11:22:37.493796] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.129 [2024-12-13 11:22:37.493832] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.129 [2024-12-13 11:22:37.493846] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.129 [2024-12-13 11:22:37.493853] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.129 [2024-12-13 11:22:37.493858] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.129 [2024-12-13 11:22:37.504008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.129 qpair failed and we were unable to recover it. 00:28:17.129 [2024-12-13 11:22:37.513789] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.129 [2024-12-13 11:22:37.513826] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.129 [2024-12-13 11:22:37.513839] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.129 [2024-12-13 11:22:37.513845] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.129 [2024-12-13 11:22:37.513851] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.129 [2024-12-13 11:22:37.524197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.129 qpair failed and we were unable to recover it. 00:28:17.129 [2024-12-13 11:22:37.534008] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.129 [2024-12-13 11:22:37.534041] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.129 [2024-12-13 11:22:37.534058] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.129 [2024-12-13 11:22:37.534064] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.129 [2024-12-13 11:22:37.534070] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.129 [2024-12-13 11:22:37.544164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.129 qpair failed and we were unable to recover it. 
00:28:17.129 [2024-12-13 11:22:37.554008] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.129 [2024-12-13 11:22:37.554046] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.129 [2024-12-13 11:22:37.554059] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.129 [2024-12-13 11:22:37.554066] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.129 [2024-12-13 11:22:37.554071] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.129 [2024-12-13 11:22:37.564250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.129 qpair failed and we were unable to recover it. 00:28:17.129 [2024-12-13 11:22:37.574025] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.129 [2024-12-13 11:22:37.574064] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.129 [2024-12-13 11:22:37.574078] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.129 [2024-12-13 11:22:37.574085] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.129 [2024-12-13 11:22:37.574091] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.129 [2024-12-13 11:22:37.584197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.129 qpair failed and we were unable to recover it. 00:28:17.129 [2024-12-13 11:22:37.594017] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.129 [2024-12-13 11:22:37.594053] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.129 [2024-12-13 11:22:37.594066] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.129 [2024-12-13 11:22:37.594072] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.129 [2024-12-13 11:22:37.594078] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.129 [2024-12-13 11:22:37.604332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.129 qpair failed and we were unable to recover it. 
00:28:17.129 [2024-12-13 11:22:37.614007] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.129 [2024-12-13 11:22:37.614048] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.129 [2024-12-13 11:22:37.614061] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.129 [2024-12-13 11:22:37.614067] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.129 [2024-12-13 11:22:37.614073] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.129 [2024-12-13 11:22:37.624291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.129 qpair failed and we were unable to recover it. 00:28:17.129 [2024-12-13 11:22:37.634174] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.129 [2024-12-13 11:22:37.634214] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.129 [2024-12-13 11:22:37.634227] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.129 [2024-12-13 11:22:37.634235] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.129 [2024-12-13 11:22:37.634240] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.129 [2024-12-13 11:22:37.644421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.129 qpair failed and we were unable to recover it. 00:28:17.129 [2024-12-13 11:22:37.654200] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.129 [2024-12-13 11:22:37.654235] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.129 [2024-12-13 11:22:37.654249] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.129 [2024-12-13 11:22:37.654256] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.129 [2024-12-13 11:22:37.654262] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.129 [2024-12-13 11:22:37.664417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.129 qpair failed and we were unable to recover it. 
00:28:17.129 [2024-12-13 11:22:37.674301] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.129 [2024-12-13 11:22:37.674336] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.129 [2024-12-13 11:22:37.674349] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.129 [2024-12-13 11:22:37.674356] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.129 [2024-12-13 11:22:37.674362] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.129 [2024-12-13 11:22:37.684376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.129 qpair failed and we were unable to recover it. 00:28:17.129 [2024-12-13 11:22:37.694291] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.129 [2024-12-13 11:22:37.694325] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.129 [2024-12-13 11:22:37.694339] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.129 [2024-12-13 11:22:37.694345] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.129 [2024-12-13 11:22:37.694352] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.389 [2024-12-13 11:22:37.704580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.389 qpair failed and we were unable to recover it. 00:28:17.389 [2024-12-13 11:22:37.714447] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.389 [2024-12-13 11:22:37.714489] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.389 [2024-12-13 11:22:37.714502] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.389 [2024-12-13 11:22:37.714509] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.389 [2024-12-13 11:22:37.714515] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.389 [2024-12-13 11:22:37.724739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.389 qpair failed and we were unable to recover it. 
00:28:17.389 [2024-12-13 11:22:37.734505] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.389 [2024-12-13 11:22:37.734544] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.389 [2024-12-13 11:22:37.734557] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.389 [2024-12-13 11:22:37.734564] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.389 [2024-12-13 11:22:37.734570] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.389 [2024-12-13 11:22:37.744760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.389 qpair failed and we were unable to recover it. 00:28:17.389 [2024-12-13 11:22:37.754549] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.389 [2024-12-13 11:22:37.754594] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.389 [2024-12-13 11:22:37.754608] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.389 [2024-12-13 11:22:37.754615] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.389 [2024-12-13 11:22:37.754621] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.389 [2024-12-13 11:22:37.764785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.389 qpair failed and we were unable to recover it. 00:28:17.390 [2024-12-13 11:22:37.774565] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.390 [2024-12-13 11:22:37.774602] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.390 [2024-12-13 11:22:37.774615] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.390 [2024-12-13 11:22:37.774622] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.390 [2024-12-13 11:22:37.774628] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.390 [2024-12-13 11:22:37.784786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.390 qpair failed and we were unable to recover it. 
00:28:17.390 [2024-12-13 11:22:37.794647] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.390 [2024-12-13 11:22:37.794679] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.390 [2024-12-13 11:22:37.794693] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.390 [2024-12-13 11:22:37.794703] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.390 [2024-12-13 11:22:37.794709] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.390 [2024-12-13 11:22:37.804861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.390 qpair failed and we were unable to recover it. 00:28:17.390 [2024-12-13 11:22:37.814746] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.390 [2024-12-13 11:22:37.814783] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.390 [2024-12-13 11:22:37.814796] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.390 [2024-12-13 11:22:37.814803] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.390 [2024-12-13 11:22:37.814809] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.390 [2024-12-13 11:22:37.824966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.390 qpair failed and we were unable to recover it. 00:28:17.390 [2024-12-13 11:22:37.834686] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.390 [2024-12-13 11:22:37.834728] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.390 [2024-12-13 11:22:37.834742] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.390 [2024-12-13 11:22:37.834748] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.390 [2024-12-13 11:22:37.834754] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.390 [2024-12-13 11:22:37.844887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.390 qpair failed and we were unable to recover it. 
00:28:17.390 [2024-12-13 11:22:37.854687] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.390 [2024-12-13 11:22:37.854723] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.390 [2024-12-13 11:22:37.854736] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.390 [2024-12-13 11:22:37.854743] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.390 [2024-12-13 11:22:37.854748] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.390 [2024-12-13 11:22:37.864978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.390 qpair failed and we were unable to recover it. 00:28:17.390 [2024-12-13 11:22:37.874810] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.390 [2024-12-13 11:22:37.874849] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.390 [2024-12-13 11:22:37.874863] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.390 [2024-12-13 11:22:37.874870] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.390 [2024-12-13 11:22:37.874876] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.390 [2024-12-13 11:22:37.885006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.390 qpair failed and we were unable to recover it. 00:28:17.390 [2024-12-13 11:22:37.894814] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.390 [2024-12-13 11:22:37.894851] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.390 [2024-12-13 11:22:37.894865] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.390 [2024-12-13 11:22:37.894872] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.390 [2024-12-13 11:22:37.894878] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.390 [2024-12-13 11:22:37.905081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.390 qpair failed and we were unable to recover it. 
00:28:17.390 [2024-12-13 11:22:37.914842] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.390 [2024-12-13 11:22:37.914876] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.390 [2024-12-13 11:22:37.914889] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.390 [2024-12-13 11:22:37.914896] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.390 [2024-12-13 11:22:37.914902] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.390 [2024-12-13 11:22:37.925100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.390 qpair failed and we were unable to recover it. 00:28:17.390 [2024-12-13 11:22:37.934950] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.390 [2024-12-13 11:22:37.934989] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.390 [2024-12-13 11:22:37.935002] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.390 [2024-12-13 11:22:37.935009] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.390 [2024-12-13 11:22:37.935015] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.390 [2024-12-13 11:22:37.945146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.390 qpair failed and we were unable to recover it. 00:28:17.390 [2024-12-13 11:22:37.955028] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.390 [2024-12-13 11:22:37.955064] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.390 [2024-12-13 11:22:37.955077] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.390 [2024-12-13 11:22:37.955084] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.390 [2024-12-13 11:22:37.955089] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.650 [2024-12-13 11:22:37.965361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.650 qpair failed and we were unable to recover it. 
00:28:17.650 [2024-12-13 11:22:37.975087] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.650 [2024-12-13 11:22:37.975123] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.650 [2024-12-13 11:22:37.975140] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.650 [2024-12-13 11:22:37.975146] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.650 [2024-12-13 11:22:37.975152] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.650 [2024-12-13 11:22:37.985481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.650 qpair failed and we were unable to recover it. 00:28:17.650 [2024-12-13 11:22:37.995045] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.650 [2024-12-13 11:22:37.995085] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.650 [2024-12-13 11:22:37.995098] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.650 [2024-12-13 11:22:37.995105] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.650 [2024-12-13 11:22:37.995111] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.650 [2024-12-13 11:22:38.005487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.650 qpair failed and we were unable to recover it. 00:28:17.650 [2024-12-13 11:22:38.015236] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.650 [2024-12-13 11:22:38.015275] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.650 [2024-12-13 11:22:38.015288] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.650 [2024-12-13 11:22:38.015295] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.650 [2024-12-13 11:22:38.015301] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.650 [2024-12-13 11:22:38.025490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.650 qpair failed and we were unable to recover it. 
00:28:17.650 [2024-12-13 11:22:38.035230] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.650 [2024-12-13 11:22:38.035262] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.650 [2024-12-13 11:22:38.035279] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.650 [2024-12-13 11:22:38.035286] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.650 [2024-12-13 11:22:38.035292] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.650 [2024-12-13 11:22:38.045609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.650 qpair failed and we were unable to recover it. 00:28:17.650 [2024-12-13 11:22:38.055317] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.650 [2024-12-13 11:22:38.055355] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.650 [2024-12-13 11:22:38.055370] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.650 [2024-12-13 11:22:38.055377] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.650 [2024-12-13 11:22:38.055383] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.650 [2024-12-13 11:22:38.065544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.650 qpair failed and we were unable to recover it. 00:28:17.650 [2024-12-13 11:22:38.075449] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.650 [2024-12-13 11:22:38.075495] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.650 [2024-12-13 11:22:38.075509] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.650 [2024-12-13 11:22:38.075516] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.650 [2024-12-13 11:22:38.075522] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.650 [2024-12-13 11:22:38.085678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.650 qpair failed and we were unable to recover it. 
00:28:17.650 [2024-12-13 11:22:38.095370] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.650 [2024-12-13 11:22:38.095411] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.650 [2024-12-13 11:22:38.095425] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.650 [2024-12-13 11:22:38.095431] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.650 [2024-12-13 11:22:38.095437] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.650 [2024-12-13 11:22:38.105779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.650 qpair failed and we were unable to recover it. 00:28:17.650 [2024-12-13 11:22:38.115444] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.650 [2024-12-13 11:22:38.115484] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.650 [2024-12-13 11:22:38.115498] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.650 [2024-12-13 11:22:38.115504] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.650 [2024-12-13 11:22:38.115510] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.650 [2024-12-13 11:22:38.125782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.651 qpair failed and we were unable to recover it. 00:28:17.651 [2024-12-13 11:22:38.135469] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.651 [2024-12-13 11:22:38.135509] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.651 [2024-12-13 11:22:38.135522] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.651 [2024-12-13 11:22:38.135530] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.651 [2024-12-13 11:22:38.135537] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.651 [2024-12-13 11:22:38.145846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.651 qpair failed and we were unable to recover it. 
00:28:17.651 [2024-12-13 11:22:38.155648] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.651 [2024-12-13 11:22:38.155691] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.651 [2024-12-13 11:22:38.155706] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.651 [2024-12-13 11:22:38.155713] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.651 [2024-12-13 11:22:38.155720] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.651 [2024-12-13 11:22:38.165811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.651 qpair failed and we were unable to recover it. 00:28:17.651 [2024-12-13 11:22:38.175608] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.651 [2024-12-13 11:22:38.175653] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.651 [2024-12-13 11:22:38.175666] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.651 [2024-12-13 11:22:38.175673] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.651 [2024-12-13 11:22:38.175679] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.651 [2024-12-13 11:22:38.185884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.651 qpair failed and we were unable to recover it. 00:28:17.651 [2024-12-13 11:22:38.195712] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.651 [2024-12-13 11:22:38.195751] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.651 [2024-12-13 11:22:38.195765] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.651 [2024-12-13 11:22:38.195771] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.651 [2024-12-13 11:22:38.195777] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.651 [2024-12-13 11:22:38.205917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.651 qpair failed and we were unable to recover it. 
00:28:17.651 [2024-12-13 11:22:38.215647] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.651 [2024-12-13 11:22:38.215685] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.651 [2024-12-13 11:22:38.215698] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.651 [2024-12-13 11:22:38.215705] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.651 [2024-12-13 11:22:38.215711] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.911 [2024-12-13 11:22:38.226165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.911 qpair failed and we were unable to recover it. 00:28:17.911 [2024-12-13 11:22:38.235806] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.911 [2024-12-13 11:22:38.235846] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.911 [2024-12-13 11:22:38.235860] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.911 [2024-12-13 11:22:38.235867] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.911 [2024-12-13 11:22:38.235875] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.911 [2024-12-13 11:22:38.246162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.911 qpair failed and we were unable to recover it. 00:28:17.911 [2024-12-13 11:22:38.255930] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.911 [2024-12-13 11:22:38.255970] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.911 [2024-12-13 11:22:38.255983] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.911 [2024-12-13 11:22:38.255990] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.911 [2024-12-13 11:22:38.255996] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.911 [2024-12-13 11:22:38.266191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.911 qpair failed and we were unable to recover it. 
00:28:17.911 [2024-12-13 11:22:38.275946] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.911 [2024-12-13 11:22:38.275983] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.911 [2024-12-13 11:22:38.275996] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.911 [2024-12-13 11:22:38.276002] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.911 [2024-12-13 11:22:38.276008] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.911 [2024-12-13 11:22:38.286123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.911 qpair failed and we were unable to recover it. 00:28:17.911 [2024-12-13 11:22:38.295986] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.911 [2024-12-13 11:22:38.296025] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.911 [2024-12-13 11:22:38.296040] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.911 [2024-12-13 11:22:38.296047] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.911 [2024-12-13 11:22:38.296052] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.911 [2024-12-13 11:22:38.306301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.911 qpair failed and we were unable to recover it. 00:28:17.911 [2024-12-13 11:22:38.316018] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.911 [2024-12-13 11:22:38.316056] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.911 [2024-12-13 11:22:38.316070] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.911 [2024-12-13 11:22:38.316076] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.911 [2024-12-13 11:22:38.316082] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.911 [2024-12-13 11:22:38.326128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.911 qpair failed and we were unable to recover it. 
00:28:17.911 [2024-12-13 11:22:38.335941] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.911 [2024-12-13 11:22:38.335980] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.911 [2024-12-13 11:22:38.335993] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.911 [2024-12-13 11:22:38.336000] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.911 [2024-12-13 11:22:38.336006] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.911 [2024-12-13 11:22:38.346464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.911 qpair failed and we were unable to recover it. 00:28:17.911 [2024-12-13 11:22:38.356171] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.911 [2024-12-13 11:22:38.356210] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.911 [2024-12-13 11:22:38.356224] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.911 [2024-12-13 11:22:38.356231] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.911 [2024-12-13 11:22:38.356237] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.911 [2024-12-13 11:22:38.366565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.911 qpair failed and we were unable to recover it. 00:28:17.911 [2024-12-13 11:22:38.376190] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.912 [2024-12-13 11:22:38.376227] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.912 [2024-12-13 11:22:38.376240] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.912 [2024-12-13 11:22:38.376247] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.912 [2024-12-13 11:22:38.376254] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.912 [2024-12-13 11:22:38.386452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.912 qpair failed and we were unable to recover it. 
00:28:17.912 [2024-12-13 11:22:38.396249] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.912 [2024-12-13 11:22:38.396293] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.912 [2024-12-13 11:22:38.396307] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.912 [2024-12-13 11:22:38.396314] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.912 [2024-12-13 11:22:38.396321] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.912 [2024-12-13 11:22:38.406573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.912 qpair failed and we were unable to recover it. 00:28:17.912 [2024-12-13 11:22:38.416236] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.912 [2024-12-13 11:22:38.416276] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.912 [2024-12-13 11:22:38.416292] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.912 [2024-12-13 11:22:38.416299] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.912 [2024-12-13 11:22:38.416305] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.912 [2024-12-13 11:22:38.426738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.912 qpair failed and we were unable to recover it. 00:28:17.912 [2024-12-13 11:22:38.436321] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.912 [2024-12-13 11:22:38.436357] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.912 [2024-12-13 11:22:38.436371] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.912 [2024-12-13 11:22:38.436378] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.912 [2024-12-13 11:22:38.436384] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.912 [2024-12-13 11:22:38.446855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.912 qpair failed and we were unable to recover it. 
00:28:17.912 [2024-12-13 11:22:38.456468] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.912 [2024-12-13 11:22:38.456506] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.912 [2024-12-13 11:22:38.456520] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.912 [2024-12-13 11:22:38.456526] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.912 [2024-12-13 11:22:38.456532] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:17.912 [2024-12-13 11:22:38.466658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:17.912 qpair failed and we were unable to recover it. 00:28:17.912 [2024-12-13 11:22:38.476461] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.912 [2024-12-13 11:22:38.476500] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.912 [2024-12-13 11:22:38.476514] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.912 [2024-12-13 11:22:38.476520] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.912 [2024-12-13 11:22:38.476527] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:18.172 [2024-12-13 11:22:38.486737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:18.172 qpair failed and we were unable to recover it. 00:28:18.172 [2024-12-13 11:22:38.496474] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.172 [2024-12-13 11:22:38.496511] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.172 [2024-12-13 11:22:38.496525] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.172 [2024-12-13 11:22:38.496532] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.172 [2024-12-13 11:22:38.496538] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:18.172 [2024-12-13 11:22:38.506822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:18.172 qpair failed and we were unable to recover it. 
00:28:18.172 [2024-12-13 11:22:38.516490] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.172 [2024-12-13 11:22:38.516528] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.172 [2024-12-13 11:22:38.516541] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.172 [2024-12-13 11:22:38.516548] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.172 [2024-12-13 11:22:38.516554] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:18.172 [2024-12-13 11:22:38.527016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:18.172 qpair failed and we were unable to recover it. 00:28:18.172 [2024-12-13 11:22:38.536553] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.172 [2024-12-13 11:22:38.536590] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.172 [2024-12-13 11:22:38.536603] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.172 [2024-12-13 11:22:38.536610] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.172 [2024-12-13 11:22:38.536616] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:18.172 [2024-12-13 11:22:38.547037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:18.172 qpair failed and we were unable to recover it. 00:28:18.172 [2024-12-13 11:22:38.556726] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.172 [2024-12-13 11:22:38.556768] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.172 [2024-12-13 11:22:38.556781] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.172 [2024-12-13 11:22:38.556788] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.172 [2024-12-13 11:22:38.556794] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:18.172 [2024-12-13 11:22:38.567029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:18.172 qpair failed and we were unable to recover it. 
00:28:18.172 [2024-12-13 11:22:38.576708] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.172 [2024-12-13 11:22:38.576744] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.172 [2024-12-13 11:22:38.576758] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.172 [2024-12-13 11:22:38.576765] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.172 [2024-12-13 11:22:38.576771] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:18.172 [2024-12-13 11:22:38.586891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:18.172 qpair failed and we were unable to recover it. 00:28:18.172 [2024-12-13 11:22:38.596867] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.172 [2024-12-13 11:22:38.596907] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.172 [2024-12-13 11:22:38.596924] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.172 [2024-12-13 11:22:38.596930] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.172 [2024-12-13 11:22:38.596936] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:18.172 [2024-12-13 11:22:38.607129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:18.172 qpair failed and we were unable to recover it. 00:28:18.172 [2024-12-13 11:22:38.616827] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.172 [2024-12-13 11:22:38.616862] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.172 [2024-12-13 11:22:38.616877] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.172 [2024-12-13 11:22:38.616884] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.172 [2024-12-13 11:22:38.616890] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:18.172 [2024-12-13 11:22:38.627033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:18.172 qpair failed and we were unable to recover it. 
00:28:18.172 [2024-12-13 11:22:38.636915] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.172 [2024-12-13 11:22:38.636953] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.172 [2024-12-13 11:22:38.636967] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.172 [2024-12-13 11:22:38.636974] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.172 [2024-12-13 11:22:38.636980] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:18.172 [2024-12-13 11:22:38.647244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:18.172 qpair failed and we were unable to recover it. 00:28:18.172 [2024-12-13 11:22:38.656930] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.172 [2024-12-13 11:22:38.656964] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.172 [2024-12-13 11:22:38.656977] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.172 [2024-12-13 11:22:38.656984] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.172 [2024-12-13 11:22:38.656990] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:18.172 [2024-12-13 11:22:38.667237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:18.172 qpair failed and we were unable to recover it. 00:28:18.172 [2024-12-13 11:22:38.677064] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.172 [2024-12-13 11:22:38.677103] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.172 [2024-12-13 11:22:38.677116] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.172 [2024-12-13 11:22:38.677123] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.172 [2024-12-13 11:22:38.677132] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:18.172 [2024-12-13 11:22:38.687308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:18.172 qpair failed and we were unable to recover it. 
00:28:18.172 [2024-12-13 11:22:38.697133] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.172 [2024-12-13 11:22:38.697169] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.172 [2024-12-13 11:22:38.697183] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.172 [2024-12-13 11:22:38.697190] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.172 [2024-12-13 11:22:38.697196] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:18.172 [2024-12-13 11:22:38.707439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:18.172 qpair failed and we were unable to recover it. 00:28:18.172 [2024-12-13 11:22:38.717270] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.172 [2024-12-13 11:22:38.717311] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.172 [2024-12-13 11:22:38.717325] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.173 [2024-12-13 11:22:38.717332] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.173 [2024-12-13 11:22:38.717338] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:18.173 [2024-12-13 11:22:38.727284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:18.173 qpair failed and we were unable to recover it. 00:28:18.173 [2024-12-13 11:22:38.737195] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.173 [2024-12-13 11:22:38.737235] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.173 [2024-12-13 11:22:38.737249] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.173 [2024-12-13 11:22:38.737256] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.173 [2024-12-13 11:22:38.737262] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:18.432 [2024-12-13 11:22:38.747565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:18.432 qpair failed and we were unable to recover it. 
00:28:18.432 [2024-12-13 11:22:38.757366] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.432 [2024-12-13 11:22:38.757405] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.432 [2024-12-13 11:22:38.757419] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.432 [2024-12-13 11:22:38.757425] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.432 [2024-12-13 11:22:38.757431] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:18.432 [2024-12-13 11:22:38.767685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:18.432 qpair failed and we were unable to recover it. 00:28:19.369 Read completed with error (sct=0, sc=8) 00:28:19.369 starting I/O failed 00:28:19.369 Read completed with error (sct=0, sc=8) 00:28:19.369 starting I/O failed 00:28:19.369 Read completed with error (sct=0, sc=8) 00:28:19.369 starting I/O failed 00:28:19.369 Read completed with error (sct=0, sc=8) 00:28:19.369 starting I/O failed 00:28:19.369 Write completed with error (sct=0, sc=8) 00:28:19.369 starting I/O failed 00:28:19.369 Write completed with error (sct=0, sc=8) 00:28:19.369 starting I/O failed 00:28:19.369 Write completed with error (sct=0, sc=8) 00:28:19.369 starting I/O failed 00:28:19.369 Write completed with error (sct=0, sc=8) 00:28:19.369 starting I/O failed 00:28:19.369 Write completed with error (sct=0, sc=8) 00:28:19.369 starting I/O failed 00:28:19.369 Read completed with error (sct=0, sc=8) 00:28:19.369 starting I/O failed 00:28:19.369 Write completed with error (sct=0, sc=8) 00:28:19.369 starting I/O failed 00:28:19.369 Write completed with error (sct=0, sc=8) 00:28:19.369 starting I/O failed 00:28:19.369 Write completed with error (sct=0, sc=8) 00:28:19.369 starting I/O failed 00:28:19.369 Write completed with error (sct=0, sc=8) 00:28:19.369 starting I/O failed 00:28:19.369 Write completed with error (sct=0, sc=8) 00:28:19.369 starting I/O failed 00:28:19.369 Read completed with error (sct=0, sc=8) 00:28:19.369 starting I/O failed 00:28:19.369 Write completed with error (sct=0, sc=8) 00:28:19.369 starting I/O failed 00:28:19.369 Write completed with error (sct=0, sc=8) 00:28:19.369 starting I/O failed 00:28:19.369 Write completed with error (sct=0, sc=8) 00:28:19.369 starting I/O failed 00:28:19.369 Write completed with error (sct=0, sc=8) 00:28:19.369 starting I/O failed 00:28:19.369 Write completed with error (sct=0, sc=8) 00:28:19.369 starting I/O failed 00:28:19.369 Read completed with error (sct=0, sc=8) 00:28:19.369 starting I/O failed 00:28:19.369 Write completed with error (sct=0, sc=8) 00:28:19.369 starting I/O failed 00:28:19.369 Read completed with error (sct=0, sc=8) 00:28:19.369 starting I/O failed 00:28:19.369 Write completed with error (sct=0, sc=8) 00:28:19.369 starting I/O failed 00:28:19.369 Read completed with error (sct=0, sc=8) 00:28:19.369 starting I/O failed 00:28:19.369 Write completed with error (sct=0, sc=8) 00:28:19.369 starting I/O failed 00:28:19.369 Read completed with error (sct=0, sc=8) 00:28:19.369 starting I/O failed 00:28:19.369 Read completed with error (sct=0, sc=8) 00:28:19.369 starting I/O failed 00:28:19.369 Write 
completed with error (sct=0, sc=8) 00:28:19.369 starting I/O failed 00:28:19.369 Write completed with error (sct=0, sc=8) 00:28:19.369 starting I/O failed 00:28:19.369 Write completed with error (sct=0, sc=8) 00:28:19.369 starting I/O failed 00:28:19.369 [2024-12-13 11:22:39.772706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.369 [2024-12-13 11:22:39.779994] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.369 [2024-12-13 11:22:39.780038] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.369 [2024-12-13 11:22:39.780055] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.369 [2024-12-13 11:22:39.780063] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.369 [2024-12-13 11:22:39.780069] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:19.369 [2024-12-13 11:22:39.790617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.369 qpair failed and we were unable to recover it. 00:28:19.369 [2024-12-13 11:22:39.800383] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.369 [2024-12-13 11:22:39.800421] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.369 [2024-12-13 11:22:39.800437] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.369 [2024-12-13 11:22:39.800444] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.369 [2024-12-13 11:22:39.800450] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:19.369 [2024-12-13 11:22:39.810666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.369 qpair failed and we were unable to recover it. 00:28:19.369 [2024-12-13 11:22:39.820371] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.369 [2024-12-13 11:22:39.820413] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.369 [2024-12-13 11:22:39.820428] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.369 [2024-12-13 11:22:39.820436] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.369 [2024-12-13 11:22:39.820441] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:19.369 [2024-12-13 11:22:39.830833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.369 qpair failed and we were unable to recover it. 
00:28:19.369 [2024-12-13 11:22:39.840432] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.370 [2024-12-13 11:22:39.840472] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.370 [2024-12-13 11:22:39.840487] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.370 [2024-12-13 11:22:39.840494] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.370 [2024-12-13 11:22:39.840500] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:19.370 [2024-12-13 11:22:39.850918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.370 qpair failed and we were unable to recover it. 00:28:19.370 [2024-12-13 11:22:39.860478] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.370 [2024-12-13 11:22:39.860516] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.370 [2024-12-13 11:22:39.860531] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.370 [2024-12-13 11:22:39.860538] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.370 [2024-12-13 11:22:39.860544] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:19.370 [2024-12-13 11:22:39.870710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.370 qpair failed and we were unable to recover it. 00:28:19.370 [2024-12-13 11:22:39.880481] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.370 [2024-12-13 11:22:39.880522] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.370 [2024-12-13 11:22:39.880538] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.370 [2024-12-13 11:22:39.880544] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.370 [2024-12-13 11:22:39.880550] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:19.370 [2024-12-13 11:22:39.890907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.370 qpair failed and we were unable to recover it. 
00:28:19.370 [2024-12-13 11:22:39.900652] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.370 [2024-12-13 11:22:39.900687] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.370 [2024-12-13 11:22:39.900702] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.370 [2024-12-13 11:22:39.900709] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.370 [2024-12-13 11:22:39.900718] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:19.370 [2024-12-13 11:22:39.910854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.370 qpair failed and we were unable to recover it. 00:28:19.370 [2024-12-13 11:22:39.920683] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.370 [2024-12-13 11:22:39.920723] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.370 [2024-12-13 11:22:39.920739] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.370 [2024-12-13 11:22:39.920746] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.370 [2024-12-13 11:22:39.920752] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:19.370 [2024-12-13 11:22:39.930972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.370 qpair failed and we were unable to recover it. 00:28:19.629 [2024-12-13 11:22:39.940669] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.629 [2024-12-13 11:22:39.940705] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.629 [2024-12-13 11:22:39.940720] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.629 [2024-12-13 11:22:39.940726] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.630 [2024-12-13 11:22:39.940733] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:19.630 [2024-12-13 11:22:39.951039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.630 qpair failed and we were unable to recover it. 
00:28:19.630 [2024-12-13 11:22:39.960702] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.630 [2024-12-13 11:22:39.960743] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.630 [2024-12-13 11:22:39.960757] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.630 [2024-12-13 11:22:39.960764] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.630 [2024-12-13 11:22:39.960770] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:19.630 [2024-12-13 11:22:39.971286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.630 qpair failed and we were unable to recover it. 00:28:19.630 [2024-12-13 11:22:39.980841] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.630 [2024-12-13 11:22:39.980887] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.630 [2024-12-13 11:22:39.980901] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.630 [2024-12-13 11:22:39.980908] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.630 [2024-12-13 11:22:39.980913] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:19.630 [2024-12-13 11:22:39.991112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.630 qpair failed and we were unable to recover it. 00:28:19.630 [2024-12-13 11:22:40.000835] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.630 [2024-12-13 11:22:40.000875] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.630 [2024-12-13 11:22:40.000889] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.630 [2024-12-13 11:22:40.000896] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.630 [2024-12-13 11:22:40.000902] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:19.630 [2024-12-13 11:22:40.011257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.630 qpair failed and we were unable to recover it. 
00:28:19.630 [2024-12-13 11:22:40.020881] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.630 [2024-12-13 11:22:40.020923] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.630 [2024-12-13 11:22:40.020939] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.630 [2024-12-13 11:22:40.020946] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.630 [2024-12-13 11:22:40.020952] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:19.630 [2024-12-13 11:22:40.031286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.630 qpair failed and we were unable to recover it. 00:28:19.630 [2024-12-13 11:22:40.040987] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.630 [2024-12-13 11:22:40.041033] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.630 [2024-12-13 11:22:40.041054] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.630 [2024-12-13 11:22:40.041065] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.630 [2024-12-13 11:22:40.041073] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:19.630 [2024-12-13 11:22:40.051273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.630 qpair failed and we were unable to recover it. 00:28:19.630 [2024-12-13 11:22:40.060943] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.630 [2024-12-13 11:22:40.060989] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.630 [2024-12-13 11:22:40.061009] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.630 [2024-12-13 11:22:40.061016] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.630 [2024-12-13 11:22:40.061023] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:19.630 [2024-12-13 11:22:40.071403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.630 qpair failed and we were unable to recover it. 
00:28:19.630 [2024-12-13 11:22:40.081095] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.630 [2024-12-13 11:22:40.081133] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.630 [2024-12-13 11:22:40.081152] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.630 [2024-12-13 11:22:40.081159] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.630 [2024-12-13 11:22:40.081165] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:19.630 [2024-12-13 11:22:40.091444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.630 qpair failed and we were unable to recover it. 00:28:19.630 [2024-12-13 11:22:40.101157] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.630 [2024-12-13 11:22:40.101197] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.630 [2024-12-13 11:22:40.101211] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.630 [2024-12-13 11:22:40.101218] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.630 [2024-12-13 11:22:40.101224] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:19.630 [2024-12-13 11:22:40.111445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.630 qpair failed and we were unable to recover it. 00:28:19.630 [2024-12-13 11:22:40.121211] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.630 [2024-12-13 11:22:40.121251] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.630 [2024-12-13 11:22:40.121270] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.630 [2024-12-13 11:22:40.121278] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.630 [2024-12-13 11:22:40.121283] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:19.630 [2024-12-13 11:22:40.131523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.630 qpair failed and we were unable to recover it. 
00:28:19.630 [2024-12-13 11:22:40.141321] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.630 [2024-12-13 11:22:40.141354] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.630 [2024-12-13 11:22:40.141371] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.630 [2024-12-13 11:22:40.141378] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.630 [2024-12-13 11:22:40.141384] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:19.630 [2024-12-13 11:22:40.151518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.630 qpair failed and we were unable to recover it. 00:28:19.630 [2024-12-13 11:22:40.161302] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.630 [2024-12-13 11:22:40.161333] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.630 [2024-12-13 11:22:40.161348] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.630 [2024-12-13 11:22:40.161355] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.630 [2024-12-13 11:22:40.161361] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:19.630 [2024-12-13 11:22:40.171687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.630 qpair failed and we were unable to recover it. 00:28:19.630 [2024-12-13 11:22:40.181450] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.630 [2024-12-13 11:22:40.181488] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.630 [2024-12-13 11:22:40.181503] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.630 [2024-12-13 11:22:40.181510] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.630 [2024-12-13 11:22:40.181515] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:19.630 [2024-12-13 11:22:40.191768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.630 qpair failed and we were unable to recover it. 
00:28:19.890 [2024-12-13 11:22:40.201390] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.890 [2024-12-13 11:22:40.201426] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.890 [2024-12-13 11:22:40.201441] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.890 [2024-12-13 11:22:40.201447] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.890 [2024-12-13 11:22:40.201453] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:19.890 [2024-12-13 11:22:40.211816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.890 qpair failed and we were unable to recover it. 00:28:19.890 [2024-12-13 11:22:40.221469] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.890 [2024-12-13 11:22:40.221503] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.890 [2024-12-13 11:22:40.221517] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.890 [2024-12-13 11:22:40.221524] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.890 [2024-12-13 11:22:40.221530] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:19.890 [2024-12-13 11:22:40.231962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.891 qpair failed and we were unable to recover it. 00:28:19.891 [2024-12-13 11:22:40.241588] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.891 [2024-12-13 11:22:40.241626] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.891 [2024-12-13 11:22:40.241640] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.891 [2024-12-13 11:22:40.241647] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.891 [2024-12-13 11:22:40.241653] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:19.891 [2024-12-13 11:22:40.251926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.891 qpair failed and we were unable to recover it. 
00:28:19.891 [2024-12-13 11:22:40.261589] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.891 [2024-12-13 11:22:40.261627] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.891 [2024-12-13 11:22:40.261645] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.891 [2024-12-13 11:22:40.261652] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.891 [2024-12-13 11:22:40.261658] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:19.891 [2024-12-13 11:22:40.272043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.891 qpair failed and we were unable to recover it. 00:28:19.891 [2024-12-13 11:22:40.281723] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.891 [2024-12-13 11:22:40.281762] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.891 [2024-12-13 11:22:40.281777] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.891 [2024-12-13 11:22:40.281784] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.891 [2024-12-13 11:22:40.281791] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:19.891 [2024-12-13 11:22:40.292107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.891 qpair failed and we were unable to recover it. 00:28:19.891 [2024-12-13 11:22:40.301618] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.891 [2024-12-13 11:22:40.301659] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.891 [2024-12-13 11:22:40.301674] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.891 [2024-12-13 11:22:40.301681] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.891 [2024-12-13 11:22:40.301687] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:19.891 [2024-12-13 11:22:40.312012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.891 qpair failed and we were unable to recover it. 
00:28:19.891 [2024-12-13 11:22:40.321776] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.891 [2024-12-13 11:22:40.321811] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.891 [2024-12-13 11:22:40.321825] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.891 [2024-12-13 11:22:40.321832] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.891 [2024-12-13 11:22:40.321838] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:19.891 [2024-12-13 11:22:40.332114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.891 qpair failed and we were unable to recover it. 00:28:19.891 [2024-12-13 11:22:40.341773] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.891 [2024-12-13 11:22:40.341813] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.891 [2024-12-13 11:22:40.341827] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.891 [2024-12-13 11:22:40.341834] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.891 [2024-12-13 11:22:40.341854] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:19.891 [2024-12-13 11:22:40.352241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.891 qpair failed and we were unable to recover it. 00:28:19.891 [2024-12-13 11:22:40.361893] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.891 [2024-12-13 11:22:40.361932] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.891 [2024-12-13 11:22:40.361946] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.891 [2024-12-13 11:22:40.361953] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.891 [2024-12-13 11:22:40.361959] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:19.891 [2024-12-13 11:22:40.372181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.891 qpair failed and we were unable to recover it. 
00:28:19.891 [2024-12-13 11:22:40.382000] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.891 [2024-12-13 11:22:40.382034] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.891 [2024-12-13 11:22:40.382049] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.891 [2024-12-13 11:22:40.382055] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.891 [2024-12-13 11:22:40.382061] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:19.891 [2024-12-13 11:22:40.392377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.891 qpair failed and we were unable to recover it. 00:28:19.891 [2024-12-13 11:22:40.402061] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.891 [2024-12-13 11:22:40.402102] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.891 [2024-12-13 11:22:40.402117] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.891 [2024-12-13 11:22:40.402124] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.891 [2024-12-13 11:22:40.402132] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:19.891 [2024-12-13 11:22:40.412372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.891 qpair failed and we were unable to recover it. 00:28:19.891 [2024-12-13 11:22:40.422040] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.891 [2024-12-13 11:22:40.422076] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.891 [2024-12-13 11:22:40.422090] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.891 [2024-12-13 11:22:40.422097] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.891 [2024-12-13 11:22:40.422105] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:19.891 [2024-12-13 11:22:40.432430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.891 qpair failed and we were unable to recover it. 
00:28:19.891 [2024-12-13 11:22:40.442207] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.891 [2024-12-13 11:22:40.442245] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.891 [2024-12-13 11:22:40.442260] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.891 [2024-12-13 11:22:40.442270] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.891 [2024-12-13 11:22:40.442277] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:19.891 [2024-12-13 11:22:40.452494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.891 qpair failed and we were unable to recover it. 00:28:20.152 [2024-12-13 11:22:40.462200] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.152 [2024-12-13 11:22:40.462240] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.152 [2024-12-13 11:22:40.462254] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.152 [2024-12-13 11:22:40.462261] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.152 [2024-12-13 11:22:40.462271] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.152 [2024-12-13 11:22:40.472547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.152 qpair failed and we were unable to recover it. 00:28:20.152 [2024-12-13 11:22:40.482295] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.152 [2024-12-13 11:22:40.482334] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.152 [2024-12-13 11:22:40.482348] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.152 [2024-12-13 11:22:40.482355] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.152 [2024-12-13 11:22:40.482361] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.152 [2024-12-13 11:22:40.492589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.152 qpair failed and we were unable to recover it. 
00:28:20.152 [2024-12-13 11:22:40.502378] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.152 [2024-12-13 11:22:40.502416] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.152 [2024-12-13 11:22:40.502431] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.152 [2024-12-13 11:22:40.502438] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.152 [2024-12-13 11:22:40.502444] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.152 [2024-12-13 11:22:40.512693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.152 qpair failed and we were unable to recover it. 00:28:20.152 [2024-12-13 11:22:40.522380] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.152 [2024-12-13 11:22:40.522422] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.152 [2024-12-13 11:22:40.522436] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.152 [2024-12-13 11:22:40.522446] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.152 [2024-12-13 11:22:40.522452] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.152 [2024-12-13 11:22:40.532744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.152 qpair failed and we were unable to recover it. 00:28:20.152 [2024-12-13 11:22:40.542497] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.152 [2024-12-13 11:22:40.542534] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.152 [2024-12-13 11:22:40.542548] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.152 [2024-12-13 11:22:40.542555] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.152 [2024-12-13 11:22:40.542561] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.152 [2024-12-13 11:22:40.552875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.152 qpair failed and we were unable to recover it. 
00:28:20.152 [2024-12-13 11:22:40.562429] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.152 [2024-12-13 11:22:40.562467] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.152 [2024-12-13 11:22:40.562482] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.152 [2024-12-13 11:22:40.562488] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.152 [2024-12-13 11:22:40.562494] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.152 [2024-12-13 11:22:40.572765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.152 qpair failed and we were unable to recover it. 00:28:20.152 [2024-12-13 11:22:40.582519] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.152 [2024-12-13 11:22:40.582557] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.152 [2024-12-13 11:22:40.582572] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.152 [2024-12-13 11:22:40.582579] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.152 [2024-12-13 11:22:40.582585] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.152 [2024-12-13 11:22:40.592955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.152 qpair failed and we were unable to recover it. 00:28:20.152 [2024-12-13 11:22:40.602722] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.152 [2024-12-13 11:22:40.602758] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.152 [2024-12-13 11:22:40.602773] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.152 [2024-12-13 11:22:40.602779] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.152 [2024-12-13 11:22:40.602785] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.152 [2024-12-13 11:22:40.613093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.152 qpair failed and we were unable to recover it. 
00:28:20.152 [2024-12-13 11:22:40.622619] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.152 [2024-12-13 11:22:40.622662] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.152 [2024-12-13 11:22:40.622675] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.152 [2024-12-13 11:22:40.622682] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.152 [2024-12-13 11:22:40.622688] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.152 [2024-12-13 11:22:40.633144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.152 qpair failed and we were unable to recover it. 00:28:20.152 [2024-12-13 11:22:40.642752] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.152 [2024-12-13 11:22:40.642790] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.152 [2024-12-13 11:22:40.642803] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.152 [2024-12-13 11:22:40.642810] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.152 [2024-12-13 11:22:40.642816] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.152 [2024-12-13 11:22:40.652974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.152 qpair failed and we were unable to recover it. 00:28:20.152 [2024-12-13 11:22:40.662737] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.152 [2024-12-13 11:22:40.662776] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.152 [2024-12-13 11:22:40.662790] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.152 [2024-12-13 11:22:40.662798] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.152 [2024-12-13 11:22:40.662803] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.152 [2024-12-13 11:22:40.673248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.152 qpair failed and we were unable to recover it. 
00:28:20.152 [2024-12-13 11:22:40.682816] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.152 [2024-12-13 11:22:40.682858] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.152 [2024-12-13 11:22:40.682872] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.152 [2024-12-13 11:22:40.682879] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.152 [2024-12-13 11:22:40.682885] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.152 [2024-12-13 11:22:40.693123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.152 qpair failed and we were unable to recover it. 00:28:20.152 [2024-12-13 11:22:40.702863] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.152 [2024-12-13 11:22:40.702901] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.153 [2024-12-13 11:22:40.702918] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.153 [2024-12-13 11:22:40.702925] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.153 [2024-12-13 11:22:40.702931] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.153 [2024-12-13 11:22:40.713294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.153 qpair failed and we were unable to recover it. 00:28:20.413 [2024-12-13 11:22:40.722957] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.413 [2024-12-13 11:22:40.722998] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.413 [2024-12-13 11:22:40.723012] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.413 [2024-12-13 11:22:40.723019] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.413 [2024-12-13 11:22:40.723025] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.413 [2024-12-13 11:22:40.733215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.413 qpair failed and we were unable to recover it. 
00:28:20.413 [2024-12-13 11:22:40.743122] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.413 [2024-12-13 11:22:40.743159] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.413 [2024-12-13 11:22:40.743173] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.413 [2024-12-13 11:22:40.743181] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.413 [2024-12-13 11:22:40.743187] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.413 [2024-12-13 11:22:40.753472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.413 qpair failed and we were unable to recover it. 00:28:20.413 [2024-12-13 11:22:40.763072] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.413 [2024-12-13 11:22:40.763115] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.413 [2024-12-13 11:22:40.763129] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.413 [2024-12-13 11:22:40.763136] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.413 [2024-12-13 11:22:40.763142] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.413 [2024-12-13 11:22:40.773491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.413 qpair failed and we were unable to recover it. 00:28:20.413 [2024-12-13 11:22:40.783152] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.413 [2024-12-13 11:22:40.783191] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.413 [2024-12-13 11:22:40.783206] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.413 [2024-12-13 11:22:40.783213] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.413 [2024-12-13 11:22:40.783221] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.413 [2024-12-13 11:22:40.793395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.413 qpair failed and we were unable to recover it. 
00:28:20.413 [2024-12-13 11:22:40.803168] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.413 [2024-12-13 11:22:40.803200] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.413 [2024-12-13 11:22:40.803214] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.413 [2024-12-13 11:22:40.803221] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.413 [2024-12-13 11:22:40.803227] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.413 [2024-12-13 11:22:40.813491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.413 qpair failed and we were unable to recover it. 00:28:20.413 [2024-12-13 11:22:40.823231] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.413 [2024-12-13 11:22:40.823276] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.413 [2024-12-13 11:22:40.823290] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.413 [2024-12-13 11:22:40.823297] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.413 [2024-12-13 11:22:40.823304] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.413 [2024-12-13 11:22:40.833694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.413 qpair failed and we were unable to recover it. 00:28:20.413 [2024-12-13 11:22:40.843333] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.413 [2024-12-13 11:22:40.843371] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.413 [2024-12-13 11:22:40.843387] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.413 [2024-12-13 11:22:40.843394] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.413 [2024-12-13 11:22:40.843400] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.413 [2024-12-13 11:22:40.853719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.413 qpair failed and we were unable to recover it. 
00:28:20.413 [2024-12-13 11:22:40.863467] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.413 [2024-12-13 11:22:40.863507] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.413 [2024-12-13 11:22:40.863522] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.413 [2024-12-13 11:22:40.863528] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.413 [2024-12-13 11:22:40.863535] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.413 [2024-12-13 11:22:40.873588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.413 qpair failed and we were unable to recover it. 00:28:20.413 [2024-12-13 11:22:40.883428] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.413 [2024-12-13 11:22:40.883471] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.413 [2024-12-13 11:22:40.883485] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.413 [2024-12-13 11:22:40.883492] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.413 [2024-12-13 11:22:40.883498] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.413 [2024-12-13 11:22:40.893557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.413 qpair failed and we were unable to recover it. 00:28:20.414 [2024-12-13 11:22:40.903581] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.414 [2024-12-13 11:22:40.903618] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.414 [2024-12-13 11:22:40.903632] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.414 [2024-12-13 11:22:40.903639] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.414 [2024-12-13 11:22:40.903645] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.414 [2024-12-13 11:22:40.913869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.414 qpair failed and we were unable to recover it. 
00:28:20.414 [2024-12-13 11:22:40.923642] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.414 [2024-12-13 11:22:40.923675] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.414 [2024-12-13 11:22:40.923690] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.414 [2024-12-13 11:22:40.923696] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.414 [2024-12-13 11:22:40.923702] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.414 [2024-12-13 11:22:40.933927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.414 qpair failed and we were unable to recover it. 00:28:20.414 [2024-12-13 11:22:40.943730] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.414 [2024-12-13 11:22:40.943768] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.414 [2024-12-13 11:22:40.943783] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.414 [2024-12-13 11:22:40.943790] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.414 [2024-12-13 11:22:40.943796] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.414 [2024-12-13 11:22:40.954072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.414 qpair failed and we were unable to recover it. 00:28:20.414 [2024-12-13 11:22:40.963704] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.414 [2024-12-13 11:22:40.963739] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.414 [2024-12-13 11:22:40.963752] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.414 [2024-12-13 11:22:40.963762] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.414 [2024-12-13 11:22:40.963769] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.414 [2024-12-13 11:22:40.973863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.414 qpair failed and we were unable to recover it. 
00:28:20.674 [2024-12-13 11:22:40.983695] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.674 [2024-12-13 11:22:40.983732] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.674 [2024-12-13 11:22:40.983746] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.674 [2024-12-13 11:22:40.983753] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.674 [2024-12-13 11:22:40.983759] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.674 [2024-12-13 11:22:40.994289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.674 qpair failed and we were unable to recover it. 00:28:20.674 [2024-12-13 11:22:41.003909] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.674 [2024-12-13 11:22:41.003950] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.674 [2024-12-13 11:22:41.003965] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.674 [2024-12-13 11:22:41.003972] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.674 [2024-12-13 11:22:41.003978] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.674 [2024-12-13 11:22:41.014168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.674 qpair failed and we were unable to recover it. 00:28:20.674 [2024-12-13 11:22:41.023927] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.674 [2024-12-13 11:22:41.023968] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.674 [2024-12-13 11:22:41.023983] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.674 [2024-12-13 11:22:41.023990] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.674 [2024-12-13 11:22:41.023996] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.674 [2024-12-13 11:22:41.034201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.674 qpair failed and we were unable to recover it. 
00:28:20.674 [2024-12-13 11:22:41.044011] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.674 [2024-12-13 11:22:41.044052] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.674 [2024-12-13 11:22:41.044066] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.674 [2024-12-13 11:22:41.044073] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.674 [2024-12-13 11:22:41.044079] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.674 [2024-12-13 11:22:41.054215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.674 qpair failed and we were unable to recover it. 00:28:20.674 [2024-12-13 11:22:41.063995] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.674 [2024-12-13 11:22:41.064031] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.674 [2024-12-13 11:22:41.064045] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.674 [2024-12-13 11:22:41.064052] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.674 [2024-12-13 11:22:41.064059] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.674 [2024-12-13 11:22:41.074211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.674 qpair failed and we were unable to recover it. 00:28:20.674 [2024-12-13 11:22:41.084025] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.674 [2024-12-13 11:22:41.084063] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.674 [2024-12-13 11:22:41.084077] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.674 [2024-12-13 11:22:41.084084] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.674 [2024-12-13 11:22:41.084090] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.674 [2024-12-13 11:22:41.094356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.674 qpair failed and we were unable to recover it. 
00:28:20.674 [2024-12-13 11:22:41.104126] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.674 [2024-12-13 11:22:41.104159] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.674 [2024-12-13 11:22:41.104174] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.674 [2024-12-13 11:22:41.104181] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.674 [2024-12-13 11:22:41.104187] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.674 [2024-12-13 11:22:41.114411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.674 qpair failed and we were unable to recover it. 00:28:20.674 [2024-12-13 11:22:41.124175] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.674 [2024-12-13 11:22:41.124212] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.674 [2024-12-13 11:22:41.124226] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.674 [2024-12-13 11:22:41.124233] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.674 [2024-12-13 11:22:41.124239] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.674 [2024-12-13 11:22:41.134624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.674 qpair failed and we were unable to recover it. 00:28:20.674 [2024-12-13 11:22:41.144232] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.674 [2024-12-13 11:22:41.144280] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.674 [2024-12-13 11:22:41.144300] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.674 [2024-12-13 11:22:41.144307] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.674 [2024-12-13 11:22:41.144313] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.674 [2024-12-13 11:22:41.154534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.674 qpair failed and we were unable to recover it. 
00:28:20.674 [2024-12-13 11:22:41.164260] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.674 [2024-12-13 11:22:41.164302] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.674 [2024-12-13 11:22:41.164316] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.674 [2024-12-13 11:22:41.164323] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.674 [2024-12-13 11:22:41.164329] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.674 [2024-12-13 11:22:41.174564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.674 qpair failed and we were unable to recover it. 00:28:20.674 [2024-12-13 11:22:41.184327] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.674 [2024-12-13 11:22:41.184369] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.675 [2024-12-13 11:22:41.184383] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.675 [2024-12-13 11:22:41.184389] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.675 [2024-12-13 11:22:41.184396] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.675 [2024-12-13 11:22:41.194568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.675 qpair failed and we were unable to recover it. 00:28:20.675 [2024-12-13 11:22:41.204462] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.675 [2024-12-13 11:22:41.204494] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.675 [2024-12-13 11:22:41.204509] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.675 [2024-12-13 11:22:41.204516] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.675 [2024-12-13 11:22:41.204522] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.675 [2024-12-13 11:22:41.214662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.675 qpair failed and we were unable to recover it. 
00:28:20.675 [2024-12-13 11:22:41.224410] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.675 [2024-12-13 11:22:41.224451] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.675 [2024-12-13 11:22:41.224465] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.675 [2024-12-13 11:22:41.224472] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.675 [2024-12-13 11:22:41.224478] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.675 [2024-12-13 11:22:41.234751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.675 qpair failed and we were unable to recover it. 00:28:20.935 [2024-12-13 11:22:41.244455] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.935 [2024-12-13 11:22:41.244492] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.935 [2024-12-13 11:22:41.244506] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.935 [2024-12-13 11:22:41.244513] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.935 [2024-12-13 11:22:41.244519] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.935 [2024-12-13 11:22:41.254813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.935 qpair failed and we were unable to recover it. 00:28:20.935 [2024-12-13 11:22:41.264681] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.935 [2024-12-13 11:22:41.264714] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.935 [2024-12-13 11:22:41.264728] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.935 [2024-12-13 11:22:41.264735] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.935 [2024-12-13 11:22:41.264741] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.935 [2024-12-13 11:22:41.274838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.935 qpair failed and we were unable to recover it. 
00:28:20.935 [2024-12-13 11:22:41.284599] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.935 [2024-12-13 11:22:41.284633] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.935 [2024-12-13 11:22:41.284648] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.935 [2024-12-13 11:22:41.284655] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.935 [2024-12-13 11:22:41.284661] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.935 [2024-12-13 11:22:41.294918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.935 qpair failed and we were unable to recover it. 00:28:20.935 [2024-12-13 11:22:41.304646] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.935 [2024-12-13 11:22:41.304684] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.935 [2024-12-13 11:22:41.304697] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.935 [2024-12-13 11:22:41.304704] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.935 [2024-12-13 11:22:41.304710] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.935 [2024-12-13 11:22:41.315090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.935 qpair failed and we were unable to recover it. 00:28:20.935 [2024-12-13 11:22:41.324770] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.935 [2024-12-13 11:22:41.324815] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.935 [2024-12-13 11:22:41.324829] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.935 [2024-12-13 11:22:41.324836] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.935 [2024-12-13 11:22:41.324842] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.935 [2024-12-13 11:22:41.335058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.935 qpair failed and we were unable to recover it. 
00:28:20.935 [2024-12-13 11:22:41.344748] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.935 [2024-12-13 11:22:41.344783] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.935 [2024-12-13 11:22:41.344798] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.935 [2024-12-13 11:22:41.344806] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.935 [2024-12-13 11:22:41.344812] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.935 [2024-12-13 11:22:41.355141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.935 qpair failed and we were unable to recover it. 00:28:20.935 [2024-12-13 11:22:41.364816] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.935 [2024-12-13 11:22:41.364853] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.935 [2024-12-13 11:22:41.364867] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.935 [2024-12-13 11:22:41.364874] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.935 [2024-12-13 11:22:41.364880] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.935 [2024-12-13 11:22:41.375114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.935 qpair failed and we were unable to recover it. 00:28:20.935 [2024-12-13 11:22:41.384944] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.935 [2024-12-13 11:22:41.384980] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.935 [2024-12-13 11:22:41.384994] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.935 [2024-12-13 11:22:41.385001] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.935 [2024-12-13 11:22:41.385007] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.935 [2024-12-13 11:22:41.395293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.935 qpair failed and we were unable to recover it. 
00:28:20.935 [2024-12-13 11:22:41.404925] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.935 [2024-12-13 11:22:41.404961] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.935 [2024-12-13 11:22:41.404975] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.935 [2024-12-13 11:22:41.404985] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.935 [2024-12-13 11:22:41.404991] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.935 [2024-12-13 11:22:41.415319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.935 qpair failed and we were unable to recover it. 00:28:20.935 [2024-12-13 11:22:41.425000] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.935 [2024-12-13 11:22:41.425033] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.935 [2024-12-13 11:22:41.425048] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.935 [2024-12-13 11:22:41.425055] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.935 [2024-12-13 11:22:41.425061] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.935 [2024-12-13 11:22:41.435364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.935 qpair failed and we were unable to recover it. 00:28:20.935 [2024-12-13 11:22:41.445019] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.935 [2024-12-13 11:22:41.445052] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.935 [2024-12-13 11:22:41.445066] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.935 [2024-12-13 11:22:41.445073] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.935 [2024-12-13 11:22:41.445080] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.935 [2024-12-13 11:22:41.455356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.935 qpair failed and we were unable to recover it. 
00:28:20.935 [2024-12-13 11:22:41.465034] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.935 [2024-12-13 11:22:41.465076] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.935 [2024-12-13 11:22:41.465089] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.935 [2024-12-13 11:22:41.465096] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.935 [2024-12-13 11:22:41.465102] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.935 [2024-12-13 11:22:41.475357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.935 qpair failed and we were unable to recover it. 00:28:20.935 [2024-12-13 11:22:41.485131] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.935 [2024-12-13 11:22:41.485167] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.935 [2024-12-13 11:22:41.485180] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.935 [2024-12-13 11:22:41.485187] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.935 [2024-12-13 11:22:41.485193] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:20.935 [2024-12-13 11:22:41.495510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.935 qpair failed and we were unable to recover it. 00:28:21.195 [2024-12-13 11:22:41.505158] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.195 [2024-12-13 11:22:41.505192] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.195 [2024-12-13 11:22:41.505206] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.195 [2024-12-13 11:22:41.505213] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.195 [2024-12-13 11:22:41.505219] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:21.195 [2024-12-13 11:22:41.515520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.195 qpair failed and we were unable to recover it. 
00:28:21.195 [2024-12-13 11:22:41.525272] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.195 [2024-12-13 11:22:41.525313] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.195 [2024-12-13 11:22:41.525327] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.195 [2024-12-13 11:22:41.525334] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.195 [2024-12-13 11:22:41.525340] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:21.195 [2024-12-13 11:22:41.535630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.195 qpair failed and we were unable to recover it. 00:28:21.195 [2024-12-13 11:22:41.545282] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.195 [2024-12-13 11:22:41.545318] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.195 [2024-12-13 11:22:41.545332] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.195 [2024-12-13 11:22:41.545338] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.195 [2024-12-13 11:22:41.545344] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:21.195 [2024-12-13 11:22:41.555746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.195 qpair failed and we were unable to recover it. 00:28:21.195 [2024-12-13 11:22:41.565489] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.195 [2024-12-13 11:22:41.565526] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.195 [2024-12-13 11:22:41.565540] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.195 [2024-12-13 11:22:41.565547] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.195 [2024-12-13 11:22:41.565554] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:21.195 [2024-12-13 11:22:41.575706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.195 qpair failed and we were unable to recover it. 
00:28:21.195 [2024-12-13 11:22:41.585439] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.195 [2024-12-13 11:22:41.585481] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.195 [2024-12-13 11:22:41.585498] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.195 [2024-12-13 11:22:41.585505] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.195 [2024-12-13 11:22:41.585511] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:21.195 [2024-12-13 11:22:41.595927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.195 qpair failed and we were unable to recover it. 00:28:21.195 [2024-12-13 11:22:41.605545] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.195 [2024-12-13 11:22:41.605585] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.195 [2024-12-13 11:22:41.605598] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.195 [2024-12-13 11:22:41.605605] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.195 [2024-12-13 11:22:41.605611] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:21.195 [2024-12-13 11:22:41.615861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.195 qpair failed and we were unable to recover it. 00:28:21.195 [2024-12-13 11:22:41.625531] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.195 [2024-12-13 11:22:41.625571] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.195 [2024-12-13 11:22:41.625585] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.195 [2024-12-13 11:22:41.625591] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.195 [2024-12-13 11:22:41.625597] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:21.195 [2024-12-13 11:22:41.635900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.195 qpair failed and we were unable to recover it. 
00:28:21.195 [2024-12-13 11:22:41.645651] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.195 [2024-12-13 11:22:41.645688] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.195 [2024-12-13 11:22:41.645702] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.195 [2024-12-13 11:22:41.645709] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.195 [2024-12-13 11:22:41.645715] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:21.195 [2024-12-13 11:22:41.655996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.195 qpair failed and we were unable to recover it. 00:28:21.195 [2024-12-13 11:22:41.665704] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.195 [2024-12-13 11:22:41.665742] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.195 [2024-12-13 11:22:41.665756] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.195 [2024-12-13 11:22:41.665763] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.195 [2024-12-13 11:22:41.665769] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:21.195 [2024-12-13 11:22:41.675949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.195 qpair failed and we were unable to recover it. 00:28:21.195 [2024-12-13 11:22:41.685734] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.195 [2024-12-13 11:22:41.685771] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.195 [2024-12-13 11:22:41.685789] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.195 [2024-12-13 11:22:41.685797] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.195 [2024-12-13 11:22:41.685803] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:21.196 [2024-12-13 11:22:41.696094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:21.196 qpair failed and we were unable to recover it. 
00:28:21.196 [2024-12-13 11:22:41.705871] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.196 [2024-12-13 11:22:41.705908] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.196 [2024-12-13 11:22:41.705921] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.196 [2024-12-13 11:22:41.705929] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.196 [2024-12-13 11:22:41.705935] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:21.196 [2024-12-13 11:22:41.716211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:21.196 qpair failed and we were unable to recover it. 00:28:21.196 [2024-12-13 11:22:41.716329] nvme_ctrlr.c:4339:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:28:21.196 A controller has encountered a failure and is being reset. 00:28:21.196 [2024-12-13 11:22:41.726050] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.196 [2024-12-13 11:22:41.726099] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.196 [2024-12-13 11:22:41.726121] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.196 [2024-12-13 11:22:41.726131] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.196 [2024-12-13 11:22:41.726139] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:21.196 [2024-12-13 11:22:41.736254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:21.196 qpair failed and we were unable to recover it. 00:28:21.196 [2024-12-13 11:22:41.745896] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.196 [2024-12-13 11:22:41.745936] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.196 [2024-12-13 11:22:41.745950] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.196 [2024-12-13 11:22:41.745957] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.196 [2024-12-13 11:22:41.745962] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:21.196 [2024-12-13 11:22:41.756293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:21.196 qpair failed and we were unable to recover it. 
00:28:21.196 [2024-12-13 11:22:41.756430] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:28:21.196 [2024-12-13 11:22:41.758384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:21.455 Controller properly reset. 00:28:22.392 Read completed with error (sct=0, sc=8) 00:28:22.392 starting I/O failed 00:28:22.392 Read completed with error (sct=0, sc=8) 00:28:22.392 starting I/O failed 00:28:22.392 Write completed with error (sct=0, sc=8) 00:28:22.392 starting I/O failed 00:28:22.392 Read completed with error (sct=0, sc=8) 00:28:22.392 starting I/O failed 00:28:22.392 Read completed with error (sct=0, sc=8) 00:28:22.392 starting I/O failed 00:28:22.392 Write completed with error (sct=0, sc=8) 00:28:22.392 starting I/O failed 00:28:22.392 Read completed with error (sct=0, sc=8) 00:28:22.392 starting I/O failed 00:28:22.392 Read completed with error (sct=0, sc=8) 00:28:22.392 starting I/O failed 00:28:22.392 Read completed with error (sct=0, sc=8) 00:28:22.392 starting I/O failed 00:28:22.392 Write completed with error (sct=0, sc=8) 00:28:22.392 starting I/O failed 00:28:22.392 Read completed with error (sct=0, sc=8) 00:28:22.392 starting I/O failed 00:28:22.392 Write completed with error (sct=0, sc=8) 00:28:22.392 starting I/O failed 00:28:22.392 Write completed with error (sct=0, sc=8) 00:28:22.392 starting I/O failed 00:28:22.392 Read completed with error (sct=0, sc=8) 00:28:22.392 starting I/O failed 00:28:22.392 Read completed with error (sct=0, sc=8) 00:28:22.392 starting I/O failed 00:28:22.392 Read completed with error (sct=0, sc=8) 00:28:22.392 starting I/O failed 00:28:22.392 Write completed with error (sct=0, sc=8) 00:28:22.392 starting I/O failed 00:28:22.392 Read completed with error (sct=0, sc=8) 00:28:22.392 starting I/O failed 00:28:22.392 Read completed with error (sct=0, sc=8) 00:28:22.392 starting I/O failed 00:28:22.392 Read completed with error (sct=0, sc=8) 00:28:22.392 starting I/O failed 00:28:22.392 Read completed with error (sct=0, sc=8) 00:28:22.392 starting I/O failed 00:28:22.392 Read completed with error (sct=0, sc=8) 00:28:22.392 starting I/O failed 00:28:22.392 Write completed with error (sct=0, sc=8) 00:28:22.392 starting I/O failed 00:28:22.392 Read completed with error (sct=0, sc=8) 00:28:22.392 starting I/O failed 00:28:22.392 Write completed with error (sct=0, sc=8) 00:28:22.392 starting I/O failed 00:28:22.392 Read completed with error (sct=0, sc=8) 00:28:22.392 starting I/O failed 00:28:22.392 Read completed with error (sct=0, sc=8) 00:28:22.392 starting I/O failed 00:28:22.392 Read completed with error (sct=0, sc=8) 00:28:22.392 starting I/O failed 00:28:22.392 Write completed with error (sct=0, sc=8) 00:28:22.392 starting I/O failed 00:28:22.392 Read completed with error (sct=0, sc=8) 00:28:22.392 starting I/O failed 00:28:22.392 Read completed with error (sct=0, sc=8) 00:28:22.392 starting I/O failed 00:28:22.392 Read completed with error (sct=0, sc=8) 00:28:22.392 starting I/O failed 00:28:22.392 [2024-12-13 11:22:42.771998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:22.392 Initializing NVMe Controllers 00:28:22.392 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:22.392 Attached to NVMe over Fabrics 
controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:22.392 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:22.392 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:22.392 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:22.392 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:22.392 Initialization complete. Launching workers. 00:28:22.392 Starting thread on core 1 00:28:22.392 Starting thread on core 2 00:28:22.392 Starting thread on core 3 00:28:22.392 Starting thread on core 0 00:28:22.392 11:22:42 -- host/target_disconnect.sh@59 -- # sync 00:28:22.392 00:28:22.392 real 0m12.442s 00:28:22.392 user 0m27.857s 00:28:22.392 sys 0m2.376s 00:28:22.392 11:22:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:22.392 11:22:42 -- common/autotest_common.sh@10 -- # set +x 00:28:22.392 ************************************ 00:28:22.392 END TEST nvmf_target_disconnect_tc2 00:28:22.392 ************************************ 00:28:22.392 11:22:42 -- host/target_disconnect.sh@80 -- # '[' -n 192.168.100.9 ']' 00:28:22.392 11:22:42 -- host/target_disconnect.sh@81 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:28:22.392 11:22:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:22.392 11:22:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:22.392 11:22:42 -- common/autotest_common.sh@10 -- # set +x 00:28:22.392 ************************************ 00:28:22.392 START TEST nvmf_target_disconnect_tc3 00:28:22.392 ************************************ 00:28:22.392 11:22:42 -- common/autotest_common.sh@1114 -- # nvmf_target_disconnect_tc3 00:28:22.392 11:22:42 -- host/target_disconnect.sh@65 -- # reconnectpid=1783262 00:28:22.392 11:22:42 -- host/target_disconnect.sh@67 -- # sleep 2 00:28:22.392 11:22:42 -- host/target_disconnect.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:28:22.392 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.297 11:22:44 -- host/target_disconnect.sh@68 -- # kill -9 1781903 00:28:24.556 11:22:44 -- host/target_disconnect.sh@70 -- # sleep 2 00:28:25.491 Write completed with error (sct=0, sc=8) 00:28:25.491 starting I/O failed 00:28:25.491 Write completed with error (sct=0, sc=8) 00:28:25.491 starting I/O failed 00:28:25.491 Write completed with error (sct=0, sc=8) 00:28:25.491 starting I/O failed 00:28:25.491 Read completed with error (sct=0, sc=8) 00:28:25.491 starting I/O failed 00:28:25.491 Read completed with error (sct=0, sc=8) 00:28:25.491 starting I/O failed 00:28:25.491 Write completed with error (sct=0, sc=8) 00:28:25.491 starting I/O failed 00:28:25.491 Write completed with error (sct=0, sc=8) 00:28:25.491 starting I/O failed 00:28:25.491 Write completed with error (sct=0, sc=8) 00:28:25.491 starting I/O failed 00:28:25.491 Read completed with error (sct=0, sc=8) 00:28:25.491 starting I/O failed 00:28:25.491 Read completed with error (sct=0, sc=8) 00:28:25.491 starting I/O failed 00:28:25.491 Write completed with error (sct=0, sc=8) 00:28:25.491 starting I/O failed 00:28:25.491 Read completed with error (sct=0, sc=8) 00:28:25.491 starting I/O failed 00:28:25.491 Read completed with error (sct=0, sc=8) 00:28:25.491 starting I/O failed 00:28:25.491 Write completed with 
error (sct=0, sc=8) 00:28:25.491 starting I/O failed 00:28:25.491 Read completed with error (sct=0, sc=8) 00:28:25.491 starting I/O failed 00:28:25.491 Read completed with error (sct=0, sc=8) 00:28:25.491 starting I/O failed 00:28:25.491 Write completed with error (sct=0, sc=8) 00:28:25.491 starting I/O failed 00:28:25.491 Write completed with error (sct=0, sc=8) 00:28:25.491 starting I/O failed 00:28:25.491 Write completed with error (sct=0, sc=8) 00:28:25.491 starting I/O failed 00:28:25.491 Read completed with error (sct=0, sc=8) 00:28:25.491 starting I/O failed 00:28:25.491 Write completed with error (sct=0, sc=8) 00:28:25.491 starting I/O failed 00:28:25.491 Write completed with error (sct=0, sc=8) 00:28:25.491 starting I/O failed 00:28:25.491 Write completed with error (sct=0, sc=8) 00:28:25.491 starting I/O failed 00:28:25.491 Write completed with error (sct=0, sc=8) 00:28:25.491 starting I/O failed 00:28:25.491 Write completed with error (sct=0, sc=8) 00:28:25.491 starting I/O failed 00:28:25.491 Read completed with error (sct=0, sc=8) 00:28:25.491 starting I/O failed 00:28:25.491 Read completed with error (sct=0, sc=8) 00:28:25.491 starting I/O failed 00:28:25.491 Read completed with error (sct=0, sc=8) 00:28:25.491 starting I/O failed 00:28:25.491 Write completed with error (sct=0, sc=8) 00:28:25.491 starting I/O failed 00:28:25.491 Read completed with error (sct=0, sc=8) 00:28:25.491 starting I/O failed 00:28:25.491 Read completed with error (sct=0, sc=8) 00:28:25.491 starting I/O failed 00:28:25.491 Read completed with error (sct=0, sc=8) 00:28:25.491 starting I/O failed 00:28:25.491 [2024-12-13 11:22:46.025095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:26.428 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 62: 1781903 Killed "${NVMF_APP[@]}" "$@" 00:28:26.428 11:22:46 -- host/target_disconnect.sh@71 -- # disconnect_init 192.168.100.9 00:28:26.428 11:22:46 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:26.428 11:22:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:26.428 11:22:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:26.428 11:22:46 -- common/autotest_common.sh@10 -- # set +x 00:28:26.428 11:22:46 -- nvmf/common.sh@469 -- # nvmfpid=1784025 00:28:26.428 11:22:46 -- nvmf/common.sh@470 -- # waitforlisten 1784025 00:28:26.428 11:22:46 -- common/autotest_common.sh@829 -- # '[' -z 1784025 ']' 00:28:26.428 11:22:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:26.428 11:22:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:26.428 11:22:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:26.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:26.428 11:22:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:26.428 11:22:46 -- common/autotest_common.sh@10 -- # set +x 00:28:26.428 11:22:46 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:26.428 [2024-12-13 11:22:46.915348] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:28:26.428 [2024-12-13 11:22:46.915393] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:26.428 EAL: No free 2048 kB hugepages reported on node 1 00:28:26.428 [2024-12-13 11:22:46.984604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:26.688 Write completed with error (sct=0, sc=8) 00:28:26.688 starting I/O failed 00:28:26.688 Read completed with error (sct=0, sc=8) 00:28:26.688 starting I/O failed 00:28:26.688 Read completed with error (sct=0, sc=8) 00:28:26.688 starting I/O failed 00:28:26.688 Write completed with error (sct=0, sc=8) 00:28:26.688 starting I/O failed 00:28:26.688 Read completed with error (sct=0, sc=8) 00:28:26.688 starting I/O failed 00:28:26.688 Read completed with error (sct=0, sc=8) 00:28:26.688 starting I/O failed 00:28:26.688 Write completed with error (sct=0, sc=8) 00:28:26.688 starting I/O failed 00:28:26.688 Read completed with error (sct=0, sc=8) 00:28:26.688 starting I/O failed 00:28:26.688 Write completed with error (sct=0, sc=8) 00:28:26.688 starting I/O failed 00:28:26.688 Write completed with error (sct=0, sc=8) 00:28:26.688 starting I/O failed 00:28:26.688 Write completed with error (sct=0, sc=8) 00:28:26.688 starting I/O failed 00:28:26.688 Write completed with error (sct=0, sc=8) 00:28:26.688 starting I/O failed 00:28:26.688 Write completed with error (sct=0, sc=8) 00:28:26.688 starting I/O failed 00:28:26.688 Write completed with error (sct=0, sc=8) 00:28:26.688 starting I/O failed 00:28:26.688 Write completed with error (sct=0, sc=8) 00:28:26.688 starting I/O failed 00:28:26.688 Read completed with error (sct=0, sc=8) 00:28:26.688 starting I/O failed 00:28:26.688 Read completed with error (sct=0, sc=8) 00:28:26.688 starting I/O failed 00:28:26.688 Write completed with error (sct=0, sc=8) 00:28:26.688 starting I/O failed 00:28:26.688 Write completed with error (sct=0, sc=8) 00:28:26.688 starting I/O failed 00:28:26.688 Write completed with error (sct=0, sc=8) 00:28:26.688 starting I/O failed 00:28:26.688 Write completed with error (sct=0, sc=8) 00:28:26.688 starting I/O failed 00:28:26.688 Read completed with error (sct=0, sc=8) 00:28:26.688 starting I/O failed 00:28:26.688 Write completed with error (sct=0, sc=8) 00:28:26.688 starting I/O failed 00:28:26.688 Read completed with error (sct=0, sc=8) 00:28:26.688 starting I/O failed 00:28:26.688 Write completed with error (sct=0, sc=8) 00:28:26.688 starting I/O failed 00:28:26.688 Read completed with error (sct=0, sc=8) 00:28:26.688 starting I/O failed 00:28:26.688 Read completed with error (sct=0, sc=8) 00:28:26.688 starting I/O failed 00:28:26.688 Read completed with error (sct=0, sc=8) 00:28:26.688 starting I/O failed 00:28:26.688 Read completed with error (sct=0, sc=8) 00:28:26.688 starting I/O failed 00:28:26.688 Write completed with error (sct=0, sc=8) 00:28:26.688 starting I/O failed 00:28:26.688 Write completed with error (sct=0, sc=8) 00:28:26.688 starting I/O failed 00:28:26.688 Write completed with error (sct=0, sc=8) 00:28:26.688 starting I/O failed 00:28:26.688 [2024-12-13 11:22:47.030074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.688 [2024-12-13 11:22:47.049639] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:26.688 [2024-12-13 11:22:47.049744] app.c: 
488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:26.688 [2024-12-13 11:22:47.049751] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:26.688 [2024-12-13 11:22:47.049757] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:26.688 [2024-12-13 11:22:47.049868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:28:26.688 [2024-12-13 11:22:47.049978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:28:26.688 [2024-12-13 11:22:47.050084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:26.688 [2024-12-13 11:22:47.050086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:28:27.256 11:22:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:27.256 11:22:47 -- common/autotest_common.sh@862 -- # return 0 00:28:27.256 11:22:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:27.256 11:22:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:27.256 11:22:47 -- common/autotest_common.sh@10 -- # set +x 00:28:27.256 11:22:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:27.256 11:22:47 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:27.256 11:22:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.256 11:22:47 -- common/autotest_common.sh@10 -- # set +x 00:28:27.256 Malloc0 00:28:27.256 11:22:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.256 11:22:47 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:28:27.256 11:22:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.256 11:22:47 -- common/autotest_common.sh@10 -- # set +x 00:28:27.256 [2024-12-13 11:22:47.790078] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1633d60/0x163f700) succeed. 00:28:27.256 [2024-12-13 11:22:47.798409] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1635350/0x16bf740) succeed. 
00:28:27.516 11:22:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.516 11:22:47 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:27.516 11:22:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.516 11:22:47 -- common/autotest_common.sh@10 -- # set +x 00:28:27.516 11:22:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.516 11:22:47 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:27.516 11:22:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.516 11:22:47 -- common/autotest_common.sh@10 -- # set +x 00:28:27.516 11:22:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.516 11:22:47 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:28:27.516 11:22:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.516 11:22:47 -- common/autotest_common.sh@10 -- # set +x 00:28:27.516 [2024-12-13 11:22:47.931096] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:28:27.516 11:22:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.516 11:22:47 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:28:27.516 11:22:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.516 11:22:47 -- common/autotest_common.sh@10 -- # set +x 00:28:27.516 11:22:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.516 11:22:47 -- host/target_disconnect.sh@73 -- # wait 1783262 00:28:27.516 Write completed with error (sct=0, sc=8) 00:28:27.516 starting I/O failed 00:28:27.516 Read completed with error (sct=0, sc=8) 00:28:27.516 starting I/O failed 00:28:27.516 Write completed with error (sct=0, sc=8) 00:28:27.516 starting I/O failed 00:28:27.516 Read completed with error (sct=0, sc=8) 00:28:27.516 starting I/O failed 00:28:27.516 Write completed with error (sct=0, sc=8) 00:28:27.516 starting I/O failed 00:28:27.516 Read completed with error (sct=0, sc=8) 00:28:27.516 starting I/O failed 00:28:27.516 Write completed with error (sct=0, sc=8) 00:28:27.516 starting I/O failed 00:28:27.516 Read completed with error (sct=0, sc=8) 00:28:27.516 starting I/O failed 00:28:27.516 Write completed with error (sct=0, sc=8) 00:28:27.516 starting I/O failed 00:28:27.516 Write completed with error (sct=0, sc=8) 00:28:27.516 starting I/O failed 00:28:27.516 Read completed with error (sct=0, sc=8) 00:28:27.516 starting I/O failed 00:28:27.516 Read completed with error (sct=0, sc=8) 00:28:27.516 starting I/O failed 00:28:27.516 Read completed with error (sct=0, sc=8) 00:28:27.516 starting I/O failed 00:28:27.516 Write completed with error (sct=0, sc=8) 00:28:27.516 starting I/O failed 00:28:27.516 Read completed with error (sct=0, sc=8) 00:28:27.516 starting I/O failed 00:28:27.516 Read completed with error (sct=0, sc=8) 00:28:27.516 starting I/O failed 00:28:27.516 Write completed with error (sct=0, sc=8) 00:28:27.516 starting I/O failed 00:28:27.516 Read completed with error (sct=0, sc=8) 00:28:27.516 starting I/O failed 00:28:27.516 Read completed with error (sct=0, sc=8) 00:28:27.516 starting I/O failed 00:28:27.516 Read completed with error (sct=0, sc=8) 00:28:27.516 starting I/O failed 00:28:27.516 Read completed with error (sct=0, sc=8) 00:28:27.516 starting I/O failed 00:28:27.516 Write completed with 
error (sct=0, sc=8) 00:28:27.516 starting I/O failed 00:28:27.516 Write completed with error (sct=0, sc=8) 00:28:27.516 starting I/O failed 00:28:27.516 Write completed with error (sct=0, sc=8) 00:28:27.516 starting I/O failed 00:28:27.516 Read completed with error (sct=0, sc=8) 00:28:27.516 starting I/O failed 00:28:27.516 Write completed with error (sct=0, sc=8) 00:28:27.516 starting I/O failed 00:28:27.516 Read completed with error (sct=0, sc=8) 00:28:27.516 starting I/O failed 00:28:27.516 Write completed with error (sct=0, sc=8) 00:28:27.516 starting I/O failed 00:28:27.516 Write completed with error (sct=0, sc=8) 00:28:27.516 starting I/O failed 00:28:27.516 Write completed with error (sct=0, sc=8) 00:28:27.516 starting I/O failed 00:28:27.516 Read completed with error (sct=0, sc=8) 00:28:27.516 starting I/O failed 00:28:27.516 Read completed with error (sct=0, sc=8) 00:28:27.516 starting I/O failed 00:28:27.516 [2024-12-13 11:22:48.035002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:27.516 [2024-12-13 11:22:48.036499] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:27.516 [2024-12-13 11:22:48.036521] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:27.516 [2024-12-13 11:22:48.036528] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:28.894 [2024-12-13 11:22:49.040384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:28.894 qpair failed and we were unable to recover it. 00:28:28.894 [2024-12-13 11:22:49.041683] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:28.894 [2024-12-13 11:22:49.041699] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:28.894 [2024-12-13 11:22:49.041705] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:29.830 [2024-12-13 11:22:50.045406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:29.830 qpair failed and we were unable to recover it. 00:28:29.830 [2024-12-13 11:22:50.046812] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:29.830 [2024-12-13 11:22:50.046827] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:29.830 [2024-12-13 11:22:50.046833] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:30.769 [2024-12-13 11:22:51.050684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.769 qpair failed and we were unable to recover it. 
00:28:30.769 [2024-12-13 11:22:51.052004] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:30.769 [2024-12-13 11:22:51.052020] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:30.769 [2024-12-13 11:22:51.052026] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:31.705 [2024-12-13 11:22:52.055830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.705 qpair failed and we were unable to recover it. 00:28:31.705 [2024-12-13 11:22:52.057277] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:31.705 [2024-12-13 11:22:52.057292] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:31.705 [2024-12-13 11:22:52.057298] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:32.641 [2024-12-13 11:22:53.061008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.641 qpair failed and we were unable to recover it. 00:28:32.641 [2024-12-13 11:22:53.062283] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:32.641 [2024-12-13 11:22:53.062299] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:32.641 [2024-12-13 11:22:53.062305] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:33.576 [2024-12-13 11:22:54.066042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:33.576 qpair failed and we were unable to recover it. 00:28:33.576 [2024-12-13 11:22:54.067361] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:33.576 [2024-12-13 11:22:54.067376] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:33.576 [2024-12-13 11:22:54.067382] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:34.512 [2024-12-13 11:22:55.071085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.512 qpair failed and we were unable to recover it. 00:28:34.512 [2024-12-13 11:22:55.072509] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:34.512 [2024-12-13 11:22:55.072530] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:34.512 [2024-12-13 11:22:55.072536] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:35.888 [2024-12-13 11:22:56.076303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.888 qpair failed and we were unable to recover it. 
00:28:35.888 [2024-12-13 11:22:56.077763] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:35.888 [2024-12-13 11:22:56.077778] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:35.888 [2024-12-13 11:22:56.077783] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:36.824 [2024-12-13 11:22:57.081580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.824 qpair failed and we were unable to recover it. 00:28:36.824 [2024-12-13 11:22:57.081705] nvme_ctrlr.c:4339:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:28:36.824 A controller has encountered a failure and is being reset. 00:28:36.824 Resorting to new failover address 192.168.100.9 00:28:36.824 [2024-12-13 11:22:57.083237] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:36.824 [2024-12-13 11:22:57.083260] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:36.824 [2024-12-13 11:22:57.083273] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:37.819 [2024-12-13 11:22:58.087185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.819 qpair failed and we were unable to recover it. 00:28:37.819 [2024-12-13 11:22:58.088575] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:37.819 [2024-12-13 11:22:58.088589] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:37.819 [2024-12-13 11:22:58.088594] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:38.810 [2024-12-13 11:22:59.092315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.810 qpair failed and we were unable to recover it. 00:28:38.810 [2024-12-13 11:22:59.092439] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.810 [2024-12-13 11:22:59.092545] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:28:38.810 [2024-12-13 11:22:59.094374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:38.810 Controller properly reset. 
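For reference, the failover path exercised in this tc3 run is configured through the RPC calls already visible earlier in this log. A minimal sketch of that sequence, assembled only from those logged commands, is shown below; rpc_cmd is the autotest helper, which is assumed here to forward to SPDK's scripts/rpc.py against the /var/tmp/spdk.sock socket named in the log.

  # start the target as in the log (-m 0xF0 pins reactors to cores 4-7); backgrounding is assumed for the sketch
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &

  # build the subsystem and expose it on the alternate (failover) address 192.168.100.9
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0
  rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420

After the original target serving 192.168.100.8 is killed (the "Killed ${NVMF_APP[@]}" line above), the reconnect example started with alt_traddr:192.168.100.9 falls back to this listener, which is the "Resorting to new failover address 192.168.100.9" transition logged just before this point.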
00:28:39.747 Read completed with error (sct=0, sc=8) 00:28:39.747 starting I/O failed 00:28:39.747 Read completed with error (sct=0, sc=8) 00:28:39.747 starting I/O failed 00:28:39.747 Write completed with error (sct=0, sc=8) 00:28:39.747 starting I/O failed 00:28:39.747 Read completed with error (sct=0, sc=8) 00:28:39.747 starting I/O failed 00:28:39.747 Read completed with error (sct=0, sc=8) 00:28:39.747 starting I/O failed 00:28:39.747 Write completed with error (sct=0, sc=8) 00:28:39.747 starting I/O failed 00:28:39.747 Write completed with error (sct=0, sc=8) 00:28:39.747 starting I/O failed 00:28:39.747 Write completed with error (sct=0, sc=8) 00:28:39.747 starting I/O failed 00:28:39.747 Write completed with error (sct=0, sc=8) 00:28:39.747 starting I/O failed 00:28:39.747 Read completed with error (sct=0, sc=8) 00:28:39.747 starting I/O failed 00:28:39.747 Read completed with error (sct=0, sc=8) 00:28:39.747 starting I/O failed 00:28:39.747 Read completed with error (sct=0, sc=8) 00:28:39.747 starting I/O failed 00:28:39.747 Read completed with error (sct=0, sc=8) 00:28:39.747 starting I/O failed 00:28:39.747 Write completed with error (sct=0, sc=8) 00:28:39.748 starting I/O failed 00:28:39.748 Read completed with error (sct=0, sc=8) 00:28:39.748 starting I/O failed 00:28:39.748 Write completed with error (sct=0, sc=8) 00:28:39.748 starting I/O failed 00:28:39.748 Write completed with error (sct=0, sc=8) 00:28:39.748 starting I/O failed 00:28:39.748 Read completed with error (sct=0, sc=8) 00:28:39.748 starting I/O failed 00:28:39.748 Write completed with error (sct=0, sc=8) 00:28:39.748 starting I/O failed 00:28:39.748 Read completed with error (sct=0, sc=8) 00:28:39.748 starting I/O failed 00:28:39.748 Write completed with error (sct=0, sc=8) 00:28:39.748 starting I/O failed 00:28:39.748 Read completed with error (sct=0, sc=8) 00:28:39.748 starting I/O failed 00:28:39.748 Read completed with error (sct=0, sc=8) 00:28:39.748 starting I/O failed 00:28:39.748 Write completed with error (sct=0, sc=8) 00:28:39.748 starting I/O failed 00:28:39.748 Write completed with error (sct=0, sc=8) 00:28:39.748 starting I/O failed 00:28:39.748 Write completed with error (sct=0, sc=8) 00:28:39.748 starting I/O failed 00:28:39.748 Write completed with error (sct=0, sc=8) 00:28:39.748 starting I/O failed 00:28:39.748 Read completed with error (sct=0, sc=8) 00:28:39.748 starting I/O failed 00:28:39.748 Write completed with error (sct=0, sc=8) 00:28:39.748 starting I/O failed 00:28:39.748 Write completed with error (sct=0, sc=8) 00:28:39.748 starting I/O failed 00:28:39.748 Write completed with error (sct=0, sc=8) 00:28:39.748 starting I/O failed 00:28:39.748 Write completed with error (sct=0, sc=8) 00:28:39.748 starting I/O failed 00:28:39.748 [2024-12-13 11:23:00.138053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.748 Initializing NVMe Controllers 00:28:39.748 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:39.748 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:39.748 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:39.748 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:39.748 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:39.748 Associating RDMA (addr:192.168.100.8 
subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:39.748 Initialization complete. Launching workers. 00:28:39.748 Starting thread on core 1 00:28:39.748 Starting thread on core 2 00:28:39.748 Starting thread on core 3 00:28:39.748 Starting thread on core 0 00:28:39.748 11:23:00 -- host/target_disconnect.sh@74 -- # sync 00:28:39.748 00:28:39.748 real 0m17.321s 00:28:39.748 user 1m0.816s 00:28:39.748 sys 0m4.006s 00:28:39.748 11:23:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:39.748 11:23:00 -- common/autotest_common.sh@10 -- # set +x 00:28:39.748 ************************************ 00:28:39.748 END TEST nvmf_target_disconnect_tc3 00:28:39.748 ************************************ 00:28:39.748 11:23:00 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:28:39.748 11:23:00 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:28:39.748 11:23:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:39.748 11:23:00 -- nvmf/common.sh@116 -- # sync 00:28:39.748 11:23:00 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:28:39.748 11:23:00 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:28:39.748 11:23:00 -- nvmf/common.sh@119 -- # set +e 00:28:39.748 11:23:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:39.748 11:23:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:28:39.748 rmmod nvme_rdma 00:28:39.748 rmmod nvme_fabrics 00:28:39.748 11:23:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:39.748 11:23:00 -- nvmf/common.sh@123 -- # set -e 00:28:39.748 11:23:00 -- nvmf/common.sh@124 -- # return 0 00:28:39.748 11:23:00 -- nvmf/common.sh@477 -- # '[' -n 1784025 ']' 00:28:39.748 11:23:00 -- nvmf/common.sh@478 -- # killprocess 1784025 00:28:39.748 11:23:00 -- common/autotest_common.sh@936 -- # '[' -z 1784025 ']' 00:28:39.748 11:23:00 -- common/autotest_common.sh@940 -- # kill -0 1784025 00:28:39.748 11:23:00 -- common/autotest_common.sh@941 -- # uname 00:28:39.748 11:23:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:39.748 11:23:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1784025 00:28:40.007 11:23:00 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:28:40.007 11:23:00 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:28:40.007 11:23:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1784025' 00:28:40.007 killing process with pid 1784025 00:28:40.007 11:23:00 -- common/autotest_common.sh@955 -- # kill 1784025 00:28:40.007 11:23:00 -- common/autotest_common.sh@960 -- # wait 1784025 00:28:40.267 11:23:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:40.267 11:23:00 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:28:40.267 00:28:40.267 real 0m37.221s 00:28:40.267 user 2m24.955s 00:28:40.267 sys 0m11.171s 00:28:40.267 11:23:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:40.267 11:23:00 -- common/autotest_common.sh@10 -- # set +x 00:28:40.267 ************************************ 00:28:40.267 END TEST nvmf_target_disconnect 00:28:40.267 ************************************ 00:28:40.267 11:23:00 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:28:40.267 11:23:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:40.267 11:23:00 -- common/autotest_common.sh@10 -- # set +x 00:28:40.267 11:23:00 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:28:40.267 00:28:40.267 real 20m48.858s 00:28:40.267 user 69m1.898s 00:28:40.267 sys 4m7.155s 00:28:40.267 11:23:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:40.267 
11:23:00 -- common/autotest_common.sh@10 -- # set +x 00:28:40.267 ************************************ 00:28:40.267 END TEST nvmf_rdma 00:28:40.267 ************************************ 00:28:40.267 11:23:00 -- spdk/autotest.sh@280 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:28:40.267 11:23:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:40.267 11:23:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:40.267 11:23:00 -- common/autotest_common.sh@10 -- # set +x 00:28:40.267 ************************************ 00:28:40.267 START TEST spdkcli_nvmf_rdma 00:28:40.267 ************************************ 00:28:40.267 11:23:00 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:28:40.267 * Looking for test storage... 00:28:40.267 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:28:40.267 11:23:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:40.267 11:23:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:40.267 11:23:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:40.527 11:23:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:40.527 11:23:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:40.527 11:23:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:40.527 11:23:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:40.527 11:23:00 -- scripts/common.sh@335 -- # IFS=.-: 00:28:40.527 11:23:00 -- scripts/common.sh@335 -- # read -ra ver1 00:28:40.527 11:23:00 -- scripts/common.sh@336 -- # IFS=.-: 00:28:40.527 11:23:00 -- scripts/common.sh@336 -- # read -ra ver2 00:28:40.527 11:23:00 -- scripts/common.sh@337 -- # local 'op=<' 00:28:40.527 11:23:00 -- scripts/common.sh@339 -- # ver1_l=2 00:28:40.527 11:23:00 -- scripts/common.sh@340 -- # ver2_l=1 00:28:40.527 11:23:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:40.527 11:23:00 -- scripts/common.sh@343 -- # case "$op" in 00:28:40.527 11:23:00 -- scripts/common.sh@344 -- # : 1 00:28:40.527 11:23:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:40.527 11:23:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:40.527 11:23:00 -- scripts/common.sh@364 -- # decimal 1 00:28:40.527 11:23:00 -- scripts/common.sh@352 -- # local d=1 00:28:40.527 11:23:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:40.527 11:23:00 -- scripts/common.sh@354 -- # echo 1 00:28:40.527 11:23:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:40.527 11:23:00 -- scripts/common.sh@365 -- # decimal 2 00:28:40.527 11:23:00 -- scripts/common.sh@352 -- # local d=2 00:28:40.527 11:23:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:40.527 11:23:00 -- scripts/common.sh@354 -- # echo 2 00:28:40.527 11:23:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:40.527 11:23:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:40.527 11:23:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:40.527 11:23:00 -- scripts/common.sh@367 -- # return 0 00:28:40.527 11:23:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:40.527 11:23:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:40.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.527 --rc genhtml_branch_coverage=1 00:28:40.527 --rc genhtml_function_coverage=1 00:28:40.527 --rc genhtml_legend=1 00:28:40.527 --rc geninfo_all_blocks=1 00:28:40.527 --rc geninfo_unexecuted_blocks=1 00:28:40.527 00:28:40.527 ' 00:28:40.527 11:23:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:40.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.527 --rc genhtml_branch_coverage=1 00:28:40.527 --rc genhtml_function_coverage=1 00:28:40.527 --rc genhtml_legend=1 00:28:40.527 --rc geninfo_all_blocks=1 00:28:40.527 --rc geninfo_unexecuted_blocks=1 00:28:40.527 00:28:40.527 ' 00:28:40.527 11:23:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:28:40.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.527 --rc genhtml_branch_coverage=1 00:28:40.527 --rc genhtml_function_coverage=1 00:28:40.527 --rc genhtml_legend=1 00:28:40.527 --rc geninfo_all_blocks=1 00:28:40.527 --rc geninfo_unexecuted_blocks=1 00:28:40.527 00:28:40.527 ' 00:28:40.527 11:23:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:40.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.527 --rc genhtml_branch_coverage=1 00:28:40.527 --rc genhtml_function_coverage=1 00:28:40.527 --rc genhtml_legend=1 00:28:40.527 --rc geninfo_all_blocks=1 00:28:40.527 --rc geninfo_unexecuted_blocks=1 00:28:40.527 00:28:40.527 ' 00:28:40.527 11:23:00 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:28:40.527 11:23:00 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:28:40.527 11:23:00 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:28:40.527 11:23:00 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:40.527 11:23:00 -- nvmf/common.sh@7 -- # uname -s 00:28:40.527 11:23:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:40.527 11:23:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:40.527 11:23:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:40.527 11:23:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:40.527 11:23:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:40.527 11:23:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:28:40.527 11:23:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:40.527 11:23:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:40.527 11:23:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:40.527 11:23:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:40.527 11:23:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:28:40.527 11:23:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:28:40.527 11:23:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:40.527 11:23:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:40.527 11:23:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:40.527 11:23:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:40.527 11:23:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:40.527 11:23:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:40.527 11:23:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:40.527 11:23:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.527 11:23:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.527 11:23:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.527 11:23:00 -- paths/export.sh@5 -- # export PATH 00:28:40.527 11:23:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.527 11:23:00 -- nvmf/common.sh@46 -- # : 0 00:28:40.527 11:23:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:40.527 11:23:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:40.527 11:23:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:40.527 11:23:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:40.527 11:23:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:40.527 11:23:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:40.527 11:23:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 
00:28:40.527 11:23:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:40.527 11:23:00 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:28:40.527 11:23:00 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:28:40.527 11:23:00 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:28:40.527 11:23:00 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:28:40.527 11:23:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:40.527 11:23:00 -- common/autotest_common.sh@10 -- # set +x 00:28:40.527 11:23:00 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:28:40.527 11:23:00 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1786586 00:28:40.528 11:23:00 -- spdkcli/common.sh@34 -- # waitforlisten 1786586 00:28:40.528 11:23:00 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:28:40.528 11:23:00 -- common/autotest_common.sh@829 -- # '[' -z 1786586 ']' 00:28:40.528 11:23:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:40.528 11:23:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:40.528 11:23:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:40.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:40.528 11:23:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:40.528 11:23:00 -- common/autotest_common.sh@10 -- # set +x 00:28:40.528 [2024-12-13 11:23:00.951977] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:40.528 [2024-12-13 11:23:00.952024] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1786586 ] 00:28:40.528 EAL: No free 2048 kB hugepages reported on node 1 00:28:40.528 [2024-12-13 11:23:01.001797] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:40.528 [2024-12-13 11:23:01.071548] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:40.528 [2024-12-13 11:23:01.071753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:40.528 [2024-12-13 11:23:01.071756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.464 11:23:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:41.464 11:23:01 -- common/autotest_common.sh@862 -- # return 0 00:28:41.464 11:23:01 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:28:41.464 11:23:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:41.464 11:23:01 -- common/autotest_common.sh@10 -- # set +x 00:28:41.464 11:23:01 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:28:41.464 11:23:01 -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:28:41.464 11:23:01 -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:28:41.464 11:23:01 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:28:41.464 11:23:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:41.464 11:23:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:41.464 11:23:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:41.464 11:23:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:41.464 11:23:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.464 11:23:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:41.464 11:23:01 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:28:41.464 11:23:01 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:41.464 11:23:01 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:41.464 11:23:01 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:41.464 11:23:01 -- common/autotest_common.sh@10 -- # set +x 00:28:48.031 11:23:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:48.031 11:23:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:48.031 11:23:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:48.031 11:23:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:48.031 11:23:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:48.031 11:23:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:48.031 11:23:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:48.031 11:23:07 -- nvmf/common.sh@294 -- # net_devs=() 00:28:48.031 11:23:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:48.031 11:23:07 -- nvmf/common.sh@295 -- # e810=() 00:28:48.031 11:23:07 -- nvmf/common.sh@295 -- # local -ga e810 00:28:48.031 11:23:07 -- nvmf/common.sh@296 -- # x722=() 00:28:48.031 11:23:07 -- nvmf/common.sh@296 -- # local -ga x722 00:28:48.031 11:23:07 -- nvmf/common.sh@297 -- # mlx=() 00:28:48.031 11:23:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:48.031 11:23:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:48.031 11:23:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:48.031 11:23:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:48.031 11:23:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:48.031 11:23:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:48.031 11:23:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:48.031 11:23:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:48.031 11:23:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:48.031 11:23:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:48.031 11:23:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:48.031 11:23:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:48.031 11:23:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:48.031 11:23:07 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:28:48.031 11:23:07 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:28:48.031 11:23:07 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:28:48.031 11:23:07 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:28:48.031 11:23:07 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:28:48.031 11:23:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:48.031 11:23:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:48.031 11:23:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:28:48.031 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:28:48.031 11:23:07 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:48.031 11:23:07 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:48.031 11:23:07 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:48.031 11:23:07 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:48.031 11:23:07 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:48.031 11:23:07 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:48.031 11:23:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:48.031 11:23:07 
-- nvmf/common.sh@340 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:28:48.031 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:28:48.031 11:23:07 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:48.031 11:23:07 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:48.031 11:23:07 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:48.031 11:23:07 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:48.031 11:23:07 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:48.031 11:23:07 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:48.031 11:23:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:48.031 11:23:07 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:28:48.031 11:23:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:48.031 11:23:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.031 11:23:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:48.031 11:23:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.031 11:23:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:28:48.031 Found net devices under 0000:18:00.0: mlx_0_0 00:28:48.031 11:23:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.031 11:23:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:48.032 11:23:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.032 11:23:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:48.032 11:23:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.032 11:23:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:28:48.032 Found net devices under 0000:18:00.1: mlx_0_1 00:28:48.032 11:23:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.032 11:23:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:48.032 11:23:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:48.032 11:23:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:48.032 11:23:07 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:28:48.032 11:23:07 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:28:48.032 11:23:07 -- nvmf/common.sh@408 -- # rdma_device_init 00:28:48.032 11:23:07 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:28:48.032 11:23:07 -- nvmf/common.sh@57 -- # uname 00:28:48.032 11:23:07 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:28:48.032 11:23:07 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:28:48.032 11:23:07 -- nvmf/common.sh@62 -- # modprobe ib_core 00:28:48.032 11:23:07 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:28:48.032 11:23:07 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:28:48.032 11:23:07 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:28:48.032 11:23:07 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:28:48.032 11:23:07 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:28:48.032 11:23:07 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:28:48.032 11:23:07 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:48.032 11:23:07 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:28:48.032 11:23:07 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:48.032 11:23:07 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:48.032 11:23:07 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:48.032 11:23:07 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:48.032 11:23:07 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:48.032 11:23:07 -- nvmf/common.sh@100 
-- # for net_dev in "${net_devs[@]}" 00:28:48.032 11:23:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:48.032 11:23:07 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:48.032 11:23:07 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:48.032 11:23:07 -- nvmf/common.sh@104 -- # continue 2 00:28:48.032 11:23:07 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:48.032 11:23:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:48.032 11:23:07 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:48.032 11:23:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:48.032 11:23:07 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:48.032 11:23:07 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:48.032 11:23:07 -- nvmf/common.sh@104 -- # continue 2 00:28:48.032 11:23:07 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:48.032 11:23:07 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:28:48.032 11:23:07 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:48.032 11:23:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:48.032 11:23:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:48.032 11:23:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:48.032 11:23:07 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:28:48.032 11:23:07 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:28:48.032 11:23:07 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:28:48.032 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:48.032 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:28:48.032 altname enp24s0f0np0 00:28:48.032 altname ens785f0np0 00:28:48.032 inet 192.168.100.8/24 scope global mlx_0_0 00:28:48.032 valid_lft forever preferred_lft forever 00:28:48.032 11:23:07 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:48.032 11:23:07 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:28:48.032 11:23:07 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:48.032 11:23:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:48.032 11:23:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:48.032 11:23:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:48.032 11:23:07 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:28:48.032 11:23:07 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:28:48.032 11:23:07 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:28:48.032 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:48.032 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:28:48.032 altname enp24s0f1np1 00:28:48.032 altname ens785f1np1 00:28:48.032 inet 192.168.100.9/24 scope global mlx_0_1 00:28:48.032 valid_lft forever preferred_lft forever 00:28:48.032 11:23:07 -- nvmf/common.sh@410 -- # return 0 00:28:48.032 11:23:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:48.032 11:23:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:48.032 11:23:07 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:28:48.032 11:23:07 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:28:48.032 11:23:07 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:28:48.032 11:23:07 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:48.032 11:23:07 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:48.032 11:23:07 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:48.032 11:23:07 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:48.032 11:23:07 
-- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:48.032 11:23:07 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:48.032 11:23:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:48.032 11:23:07 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:48.032 11:23:07 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:48.032 11:23:07 -- nvmf/common.sh@104 -- # continue 2 00:28:48.032 11:23:07 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:48.032 11:23:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:48.032 11:23:07 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:48.032 11:23:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:48.032 11:23:07 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:48.032 11:23:07 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:48.032 11:23:07 -- nvmf/common.sh@104 -- # continue 2 00:28:48.032 11:23:07 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:48.032 11:23:07 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:28:48.032 11:23:07 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:48.032 11:23:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:48.032 11:23:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:48.032 11:23:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:48.032 11:23:07 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:48.032 11:23:07 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:28:48.032 11:23:07 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:48.032 11:23:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:48.032 11:23:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:48.032 11:23:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:48.032 11:23:07 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:28:48.032 192.168.100.9' 00:28:48.032 11:23:07 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:28:48.032 192.168.100.9' 00:28:48.032 11:23:07 -- nvmf/common.sh@445 -- # head -n 1 00:28:48.032 11:23:07 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:48.032 11:23:07 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:28:48.032 192.168.100.9' 00:28:48.032 11:23:07 -- nvmf/common.sh@446 -- # tail -n +2 00:28:48.032 11:23:07 -- nvmf/common.sh@446 -- # head -n 1 00:28:48.032 11:23:07 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:48.032 11:23:07 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:28:48.032 11:23:07 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:48.032 11:23:07 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:28:48.032 11:23:07 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:28:48.032 11:23:07 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:28:48.032 11:23:07 -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:28:48.032 11:23:07 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:28:48.032 11:23:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:48.032 11:23:07 -- common/autotest_common.sh@10 -- # set +x 00:28:48.032 11:23:07 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:28:48.032 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:28:48.032 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:28:48.032 '\''/bdevs/malloc create 32 512 Malloc4'\'' 
'\''Malloc4'\'' True 00:28:48.032 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:28:48.032 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:28:48.032 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:28:48.032 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:48.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:28:48.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:28:48.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:28:48.032 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:48.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:28:48.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:28:48.032 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:48.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:28:48.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:28:48.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:28:48.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:48.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:48.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:28:48.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:28:48.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:28:48.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:28:48.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:48.032 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:28:48.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:28:48.033 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:28:48.033 ' 00:28:48.033 [2024-12-13 11:23:07.873971] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:28:49.410 [2024-12-13 11:23:09.931008] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ffaa30/0x2010000) succeed. 
00:28:49.410 [2024-12-13 11:23:09.939803] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ffc110/0x2090040) succeed. 00:28:50.787 [2024-12-13 11:23:11.290971] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:28:53.321 [2024-12-13 11:23:13.710515] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:28:55.852 [2024-12-13 11:23:15.817424] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:28:57.229 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:28:57.229 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:28:57.229 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:28:57.229 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:28:57.229 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:28:57.229 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:28:57.229 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:28:57.229 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:28:57.229 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:28:57.229 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:28:57.229 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:28:57.229 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:57.229 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:28:57.229 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:28:57.229 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:57.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:28:57.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:28:57.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:28:57.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:28:57.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:57.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:28:57.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:28:57.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 
192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:28:57.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:28:57.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:57.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:28:57.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:28:57.230 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:28:57.230 11:23:17 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:28:57.230 11:23:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:57.230 11:23:17 -- common/autotest_common.sh@10 -- # set +x 00:28:57.230 11:23:17 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:28:57.230 11:23:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:57.230 11:23:17 -- common/autotest_common.sh@10 -- # set +x 00:28:57.230 11:23:17 -- spdkcli/nvmf.sh@69 -- # check_match 00:28:57.230 11:23:17 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:28:57.489 11:23:17 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:28:57.489 11:23:17 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:28:57.489 11:23:17 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:28:57.489 11:23:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:57.489 11:23:17 -- common/autotest_common.sh@10 -- # set +x 00:28:57.489 11:23:18 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:28:57.489 11:23:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:57.489 11:23:18 -- common/autotest_common.sh@10 -- # set +x 00:28:57.489 11:23:18 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:28:57.489 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:28:57.489 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:57.489 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:28:57.489 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:28:57.489 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:28:57.489 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:28:57.489 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:57.489 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:28:57.489 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:28:57.489 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:28:57.489 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:28:57.489 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:28:57.489 '\''/bdevs/malloc 
delete Malloc1'\'' '\''Malloc1'\'' 00:28:57.489 ' 00:29:02.763 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:02.763 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:02.763 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:02.763 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:02.763 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:29:02.763 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:29:02.763 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:02.763 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:02.763 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:02.763 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:02.763 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:29:02.763 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:02.763 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:02.763 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:02.763 11:23:22 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:02.763 11:23:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:02.763 11:23:22 -- common/autotest_common.sh@10 -- # set +x 00:29:02.763 11:23:23 -- spdkcli/nvmf.sh@90 -- # killprocess 1786586 00:29:02.763 11:23:23 -- common/autotest_common.sh@936 -- # '[' -z 1786586 ']' 00:29:02.763 11:23:23 -- common/autotest_common.sh@940 -- # kill -0 1786586 00:29:02.763 11:23:23 -- common/autotest_common.sh@941 -- # uname 00:29:02.763 11:23:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:02.763 11:23:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1786586 00:29:02.763 11:23:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:02.763 11:23:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:02.763 11:23:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1786586' 00:29:02.763 killing process with pid 1786586 00:29:02.763 11:23:23 -- common/autotest_common.sh@955 -- # kill 1786586 00:29:02.763 [2024-12-13 11:23:23.087507] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:29:02.763 11:23:23 -- common/autotest_common.sh@960 -- # wait 1786586 00:29:03.023 11:23:23 -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:29:03.023 11:23:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:03.023 11:23:23 -- nvmf/common.sh@116 -- # sync 00:29:03.023 11:23:23 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:29:03.023 11:23:23 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:29:03.023 11:23:23 -- nvmf/common.sh@119 -- # set +e 00:29:03.023 11:23:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:03.023 11:23:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:29:03.023 rmmod nvme_rdma 00:29:03.023 rmmod nvme_fabrics 
00:29:03.023 11:23:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:03.023 11:23:23 -- nvmf/common.sh@123 -- # set -e 00:29:03.023 11:23:23 -- nvmf/common.sh@124 -- # return 0 00:29:03.023 11:23:23 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:29:03.023 11:23:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:03.023 11:23:23 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:29:03.023 00:29:03.023 real 0m22.661s 00:29:03.023 user 0m48.968s 00:29:03.023 sys 0m5.143s 00:29:03.023 11:23:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:03.023 11:23:23 -- common/autotest_common.sh@10 -- # set +x 00:29:03.023 ************************************ 00:29:03.023 END TEST spdkcli_nvmf_rdma 00:29:03.023 ************************************ 00:29:03.023 11:23:23 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:29:03.023 11:23:23 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:29:03.023 11:23:23 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:29:03.023 11:23:23 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:29:03.023 11:23:23 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:29:03.023 11:23:23 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:29:03.023 11:23:23 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:29:03.023 11:23:23 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:03.023 11:23:23 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:29:03.023 11:23:23 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:29:03.023 11:23:23 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:29:03.023 11:23:23 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:29:03.023 11:23:23 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:29:03.023 11:23:23 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:29:03.023 11:23:23 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:29:03.023 11:23:23 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:29:03.023 11:23:23 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:29:03.023 11:23:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:03.023 11:23:23 -- common/autotest_common.sh@10 -- # set +x 00:29:03.023 11:23:23 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:29:03.023 11:23:23 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:29:03.023 11:23:23 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:29:03.023 11:23:23 -- common/autotest_common.sh@10 -- # set +x 00:29:07.222 INFO: APP EXITING 00:29:07.222 INFO: killing all VMs 00:29:07.222 INFO: killing vhost app 00:29:07.222 WARN: no vhost pid file found 00:29:07.222 INFO: EXIT DONE 00:29:09.760 Waiting for block devices as requested 00:29:09.760 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:10.019 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:10.019 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:10.019 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:10.019 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:10.279 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:10.279 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:10.279 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:10.279 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:10.538 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:10.538 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:10.538 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:10.538 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:10.797 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:10.797 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:10.797 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:11.056 0000:d8:00.0 (8086 
0a54): vfio-pci -> nvme 00:29:15.277 Cleaning 00:29:15.277 Removing: /var/run/dpdk/spdk0/config 00:29:15.277 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:15.277 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:15.277 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:15.277 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:15.277 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:29:15.277 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:29:15.277 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:29:15.277 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:29:15.278 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:15.278 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:15.278 Removing: /var/run/dpdk/spdk1/config 00:29:15.278 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:15.278 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:15.278 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:15.278 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:15.278 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:29:15.278 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:29:15.278 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:29:15.278 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:29:15.278 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:15.278 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:15.278 Removing: /var/run/dpdk/spdk1/mp_socket 00:29:15.278 Removing: /var/run/dpdk/spdk2/config 00:29:15.278 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:15.278 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:15.278 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:15.278 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:15.278 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:29:15.278 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:29:15.278 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:29:15.278 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:29:15.278 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:15.278 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:15.278 Removing: /var/run/dpdk/spdk3/config 00:29:15.278 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:15.278 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:15.278 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:15.278 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:15.278 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:29:15.278 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:29:15.278 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:29:15.278 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:29:15.278 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:15.278 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:15.278 Removing: /var/run/dpdk/spdk4/config 00:29:15.278 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:15.278 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:15.278 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:15.538 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:15.538 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:29:15.538 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:29:15.538 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:29:15.538 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:29:15.538 
Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:15.538 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:15.538 Removing: /dev/shm/bdevperf_trace.pid1607820 00:29:15.538 Removing: /dev/shm/bdevperf_trace.pid1703353 00:29:15.538 Removing: /dev/shm/bdev_svc_trace.1 00:29:15.538 Removing: /dev/shm/nvmf_trace.0 00:29:15.538 Removing: /dev/shm/spdk_tgt_trace.pid1439331 00:29:15.538 Removing: /var/run/dpdk/spdk0 00:29:15.538 Removing: /var/run/dpdk/spdk1 00:29:15.538 Removing: /var/run/dpdk/spdk2 00:29:15.538 Removing: /var/run/dpdk/spdk3 00:29:15.538 Removing: /var/run/dpdk/spdk4 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1436007 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1437597 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1439331 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1440088 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1445914 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1447648 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1448004 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1448426 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1448869 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1449220 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1449397 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1449563 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1449892 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1450969 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1454140 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1454439 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1454737 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1454998 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1455309 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1455575 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1456115 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1456151 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1456447 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1456707 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1456909 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1457020 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1457643 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1457836 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1458199 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1458426 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1458598 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1458657 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1458922 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1459209 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1459476 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1459759 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1460031 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1460314 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1460568 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1460811 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1461012 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1461250 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1461454 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1461716 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1461988 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1462276 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1462542 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1462821 00:29:15.538 Removing: /var/run/dpdk/spdk_pid1463095 00:29:15.798 Removing: /var/run/dpdk/spdk_pid1463375 00:29:15.798 Removing: /var/run/dpdk/spdk_pid1463639 00:29:15.798 Removing: /var/run/dpdk/spdk_pid1463930 00:29:15.798 Removing: /var/run/dpdk/spdk_pid1464197 00:29:15.798 Removing: /var/run/dpdk/spdk_pid1464478 00:29:15.798 Removing: /var/run/dpdk/spdk_pid1464711 00:29:15.798 Removing: /var/run/dpdk/spdk_pid1464960 
00:29:15.798 Removing: /var/run/dpdk/spdk_pid1465167 00:29:15.798 Removing: /var/run/dpdk/spdk_pid1465406 00:29:15.798 Removing: /var/run/dpdk/spdk_pid1465611 00:29:15.798 Removing: /var/run/dpdk/spdk_pid1465882 00:29:15.798 Removing: /var/run/dpdk/spdk_pid1466150 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1466434 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1466700 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1466982 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1467257 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1467539 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1467808 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1468099 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1468365 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1468649 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1468892 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1469151 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1469276 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1469620 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1473838 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1573878 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1577973 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1588919 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1594150 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1597697 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1598543 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1607820 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1608108 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1612224 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1618192 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1621551 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1631704 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1656598 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1660141 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1665451 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1701173 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1702129 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1703353 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1707565 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1714662 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1715660 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1716535 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1717597 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1718094 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1723107 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1723117 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1727754 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1728292 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1729007 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1729708 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1729874 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1733529 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1735478 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1737579 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1739465 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1741575 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1743441 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1750144 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1750618 00:29:15.799 Removing: /var/run/dpdk/spdk_pid1754051 00:29:16.059 Removing: /var/run/dpdk/spdk_pid1755426 00:29:16.059 Removing: /var/run/dpdk/spdk_pid1765827 00:29:16.059 Removing: /var/run/dpdk/spdk_pid1768765 00:29:16.059 Removing: /var/run/dpdk/spdk_pid1774450 00:29:16.059 Removing: /var/run/dpdk/spdk_pid1774759 00:29:16.059 Removing: /var/run/dpdk/spdk_pid1780788 00:29:16.059 Removing: /var/run/dpdk/spdk_pid1781184 00:29:16.059 Removing: /var/run/dpdk/spdk_pid1783262 
00:29:16.059 Removing: /var/run/dpdk/spdk_pid1786586 00:29:16.059 Clean 00:29:16.059 killing process with pid 1382154 00:29:24.186 killing process with pid 1382151 00:29:24.186 killing process with pid 1382153 00:29:24.186 killing process with pid 1382152 00:29:24.186 11:23:43 -- common/autotest_common.sh@1446 -- # return 0 00:29:24.186 11:23:43 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:29:24.186 11:23:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:24.186 11:23:43 -- common/autotest_common.sh@10 -- # set +x 00:29:24.186 11:23:43 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:29:24.186 11:23:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:24.186 11:23:43 -- common/autotest_common.sh@10 -- # set +x 00:29:24.186 11:23:43 -- spdk/autotest.sh@377 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:29:24.186 11:23:43 -- spdk/autotest.sh@379 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:29:24.186 11:23:43 -- spdk/autotest.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:29:24.186 11:23:43 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:29:24.186 11:23:43 -- spdk/autotest.sh@383 -- # hostname 00:29:24.186 11:23:43 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-37 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:29:24.186 geninfo: WARNING: invalid characters removed from testname! 00:29:42.289 11:24:00 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:29:42.289 11:24:02 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:29:43.668 11:24:03 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:29:45.046 11:24:05 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:29:46.424 11:24:06 -- 
spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:29:48.332 11:24:08 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:29:49.712 11:24:09 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:49.712 11:24:09 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:29:49.712 11:24:09 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:29:49.712 11:24:09 -- common/autotest_common.sh@1690 -- $ lcov --version 00:29:49.712 11:24:10 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:29:49.712 11:24:10 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:29:49.712 11:24:10 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:29:49.712 11:24:10 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:29:49.712 11:24:10 -- scripts/common.sh@335 -- $ IFS=.-: 00:29:49.712 11:24:10 -- scripts/common.sh@335 -- $ read -ra ver1 00:29:49.712 11:24:10 -- scripts/common.sh@336 -- $ IFS=.-: 00:29:49.712 11:24:10 -- scripts/common.sh@336 -- $ read -ra ver2 00:29:49.712 11:24:10 -- scripts/common.sh@337 -- $ local 'op=<' 00:29:49.712 11:24:10 -- scripts/common.sh@339 -- $ ver1_l=2 00:29:49.712 11:24:10 -- scripts/common.sh@340 -- $ ver2_l=1 00:29:49.712 11:24:10 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:29:49.712 11:24:10 -- scripts/common.sh@343 -- $ case "$op" in 00:29:49.712 11:24:10 -- scripts/common.sh@344 -- $ : 1 00:29:49.712 11:24:10 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:29:49.712 11:24:10 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:49.712 11:24:10 -- scripts/common.sh@364 -- $ decimal 1 00:29:49.712 11:24:10 -- scripts/common.sh@352 -- $ local d=1 00:29:49.712 11:24:10 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:29:49.712 11:24:10 -- scripts/common.sh@354 -- $ echo 1 00:29:49.712 11:24:10 -- scripts/common.sh@364 -- $ ver1[v]=1 00:29:49.712 11:24:10 -- scripts/common.sh@365 -- $ decimal 2 00:29:49.712 11:24:10 -- scripts/common.sh@352 -- $ local d=2 00:29:49.712 11:24:10 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:29:49.712 11:24:10 -- scripts/common.sh@354 -- $ echo 2 00:29:49.712 11:24:10 -- scripts/common.sh@365 -- $ ver2[v]=2 00:29:49.712 11:24:10 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:29:49.712 11:24:10 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:29:49.712 11:24:10 -- scripts/common.sh@367 -- $ return 0 00:29:49.712 11:24:10 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:49.712 11:24:10 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:29:49.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.712 --rc genhtml_branch_coverage=1 00:29:49.712 --rc genhtml_function_coverage=1 00:29:49.712 --rc genhtml_legend=1 00:29:49.712 --rc geninfo_all_blocks=1 00:29:49.712 --rc geninfo_unexecuted_blocks=1 00:29:49.712 00:29:49.712 ' 00:29:49.712 11:24:10 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:29:49.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.712 --rc genhtml_branch_coverage=1 00:29:49.712 --rc genhtml_function_coverage=1 00:29:49.712 --rc genhtml_legend=1 00:29:49.712 --rc geninfo_all_blocks=1 00:29:49.712 --rc geninfo_unexecuted_blocks=1 00:29:49.712 00:29:49.712 ' 00:29:49.712 11:24:10 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:29:49.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.712 --rc genhtml_branch_coverage=1 00:29:49.712 --rc genhtml_function_coverage=1 00:29:49.712 --rc genhtml_legend=1 00:29:49.712 --rc geninfo_all_blocks=1 00:29:49.712 --rc geninfo_unexecuted_blocks=1 00:29:49.712 00:29:49.712 ' 00:29:49.712 11:24:10 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:29:49.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.712 --rc genhtml_branch_coverage=1 00:29:49.712 --rc genhtml_function_coverage=1 00:29:49.712 --rc genhtml_legend=1 00:29:49.712 --rc geninfo_all_blocks=1 00:29:49.712 --rc geninfo_unexecuted_blocks=1 00:29:49.712 00:29:49.712 ' 00:29:49.712 11:24:10 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:49.712 11:24:10 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:29:49.713 11:24:10 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:49.713 11:24:10 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:49.713 11:24:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.713 11:24:10 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.713 11:24:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.713 11:24:10 -- paths/export.sh@5 -- $ export PATH 00:29:49.713 11:24:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.713 11:24:10 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:29:49.713 11:24:10 -- common/autobuild_common.sh@440 -- $ date +%s 00:29:49.713 11:24:10 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734085450.XXXXXX 00:29:49.713 11:24:10 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734085450.9QlEg7 00:29:49.713 11:24:10 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:29:49.713 11:24:10 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:29:49.713 11:24:10 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:29:49.713 11:24:10 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:29:49.713 11:24:10 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:29:49.713 11:24:10 -- common/autobuild_common.sh@456 -- $ get_config_params 00:29:49.713 11:24:10 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:29:49.713 11:24:10 -- common/autotest_common.sh@10 -- $ set +x 00:29:49.713 11:24:10 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:29:49.713 11:24:10 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:29:49.713 11:24:10 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:29:49.713 11:24:10 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:29:49.713 11:24:10 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:29:49.713 11:24:10 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:29:49.713 11:24:10 -- spdk/autopackage.sh@19 -- $ timing_finish 00:29:49.713 11:24:10 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:49.713 11:24:10 -- common/autotest_common.sh@735 -- $ '[' -x 
/usr/local/FlameGraph/flamegraph.pl ']' 00:29:49.713 11:24:10 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:29:49.713 11:24:10 -- spdk/autopackage.sh@20 -- $ exit 0 00:29:49.713 + [[ -n 1339826 ]] 00:29:49.713 + sudo kill 1339826 00:29:49.724 [Pipeline] } 00:29:49.740 [Pipeline] // stage 00:29:49.746 [Pipeline] } 00:29:49.761 [Pipeline] // timeout 00:29:49.766 [Pipeline] } 00:29:49.781 [Pipeline] // catchError 00:29:49.786 [Pipeline] } 00:29:49.803 [Pipeline] // wrap 00:29:49.809 [Pipeline] } 00:29:49.823 [Pipeline] // catchError 00:29:49.833 [Pipeline] stage 00:29:49.836 [Pipeline] { (Epilogue) 00:29:49.850 [Pipeline] catchError 00:29:49.852 [Pipeline] { 00:29:49.866 [Pipeline] echo 00:29:49.868 Cleanup processes 00:29:49.875 [Pipeline] sh 00:29:50.163 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:29:50.163 1804487 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:29:50.178 [Pipeline] sh 00:29:50.564 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:29:50.564 ++ grep -v 'sudo pgrep' 00:29:50.564 ++ awk '{print $1}' 00:29:50.564 + sudo kill -9 00:29:50.564 + true 00:29:50.622 [Pipeline] sh 00:29:50.916 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:00.912 [Pipeline] sh 00:30:01.199 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:01.199 Artifacts sizes are good 00:30:01.215 [Pipeline] archiveArtifacts 00:30:01.223 Archiving artifacts 00:30:01.371 [Pipeline] sh 00:30:01.658 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-phy-autotest 00:30:01.675 [Pipeline] cleanWs 00:30:01.685 [WS-CLEANUP] Deleting project workspace... 00:30:01.685 [WS-CLEANUP] Deferred wipeout is used... 00:30:01.692 [WS-CLEANUP] done 00:30:01.694 [Pipeline] } 00:30:01.709 [Pipeline] // catchError 00:30:01.723 [Pipeline] sh 00:30:02.005 + logger -p user.info -t JENKINS-CI 00:30:02.014 [Pipeline] } 00:30:02.026 [Pipeline] // stage 00:30:02.032 [Pipeline] } 00:30:02.045 [Pipeline] // node 00:30:02.049 [Pipeline] End of Pipeline 00:30:02.095 Finished: SUCCESS
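Note on the coverage steps logged above (around 11:24:00 to 11:24:09): the run follows a standard lcov flow, capturing the coverage recorded while the tests ran, appending it to the pre-test baseline, and then stripping paths that should not count against SPDK (the bundled dpdk/ sources, system headers under /usr, and a few example and tool apps) before the intermediate tracefiles are deleted. A minimal sketch of that flow is below; the checkout path, output directory, and test tag are placeholders, and only lcov options that already appear in the log (-c, -a, -r, --no-external, -q, the --rc coverage switches) are assumed.

# Minimal sketch of the capture/merge/filter flow, assuming an lcov 1.x release as in this run.
SPDK_DIR=/path/to/spdk                       # placeholder checkout location
OUT=$SPDK_DIR/../output                      # placeholder output directory
LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'

# 1. Capture the coverage gathered while the tests ran (cov_base.info was captured before them).
lcov $LCOV_OPTS -q -c --no-external -d "$SPDK_DIR" -t my-test-host -o "$OUT/cov_test.info"

# 2. Merge the baseline and the test capture into one tracefile.
lcov $LCOV_OPTS -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

# 3. Drop paths that are not SPDK code proper.
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -q -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
done

# 4. Remove the intermediate captures once cov_total.info is complete.
rm -f "$OUT/cov_base.info" "$OUT/cov_test.info"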
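Note on the version check traced at 11:24:10: scripts/common.sh compares the installed lcov version (the last field of 'lcov --version', taken with awk) against 2 by splitting both strings on '.', '-' and ':' and walking the numeric components left to right; because 1.15 sorts before 2, this run then sets the --rc lcov_branch_coverage / lcov_function_coverage switches in LCOV_OPTS. A stand-alone sketch of that comparison is below; ver_lt is a hypothetical helper name (the job's own logic lives in the lt/cmp_versions helpers of scripts/common.sh), and it assumes plain integer version components.

# Hypothetical stand-alone form of the component-wise "less than" check.
ver_lt() {
    local -a a b
    local i max x y
    IFS='.-:' read -ra a <<< "$1"
    IFS='.-:' read -ra b <<< "$2"
    max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < max; i++ )); do
        x=${a[i]:-0} y=${b[i]:-0}          # missing components count as 0, e.g. "2" vs "2.0"
        (( x > y )) && return 1            # left operand is newer
        (( x < y )) && return 0            # left operand is older
    done
    return 1                               # equal versions are not "less than"
}

# Mirrors the check in the log, which picks the explicit --rc switches when lcov is older than 2.
if ver_lt "$(lcov --version | awk '{print $NF}')" 2; then
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi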
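Note on the process sweeps in the prologue and epilogue: both stages list anything still referencing the workspace's spdk tree, filter out the pgrep invocation itself, and force-kill whatever remains; in this run nothing was left over, so the bare 'kill -9' fails and the stage falls through to 'true'. A compact sketch of the same idiom is below, with WORKSPACE spelled out as a placeholder for the job's workspace path.

# Hypothetical one-shot form of the leftover-process sweep seen above.
WORKSPACE=/var/jenkins/workspace/nvmf-phy-autotest   # placeholder workspace path
pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
# Only kill if something survived the test run; '|| true' keeps the cleanup step green either way.
[ -n "$pids" ] && sudo kill -9 $pids || true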